7.2. Updating the NFS Server

Depending on the environment, the NFS server can be updated in the following ways:
  • Updating Gluster NFS
  • Updating NFS-Ganesha in the Offline Mode
  • Migrating from Gluster NFS to NFS Ganesha in Offline mode
More detailed information about each is provided in the following sections.

7.2.1. Updating Gluster NFS

7.2.2. Updating NFS-Ganesha in the Offline Mode

Note

NFS-Ganesha does not support in-service updates. This means all running services and I/O operations must be stopped before starting the update process.
Execute the following steps to update the NFS-Ganesha service from Red Hat Gluster Storage 3.1.x to Red Hat Gluster Storage 3.2:
  1. Back up all the volume export files, ganesha.conf, and ganesha-ha.conf to a backup directory on all the nodes. When updating from Red Hat Gluster Storage 3.1.x, these files are under /etc/ganesha; when updating from Red Hat Gluster Storage 3.2, they are under /var/run/gluster/shared_storage/nfs-ganesha:
    From Red Hat Gluster Storage 3.1.x to Red Hat Gluster Storage 3.2

    For example:

    # cp /etc/ganesha/exports/export.v1.conf backup/
    # cp /etc/ganesha/exports/export.v2.conf backup/
    # cp /etc/ganesha/exports/export.v3.conf backup/
    # cp /etc/ganesha/exports/export.v4.conf backup/
    # cp /etc/ganesha/exports/export.v5.conf backup/
    # cp /etc/ganesha/ganesha.conf backup/
    # cp /etc/ganesha/ganesha-ha.conf backup/
    From Red Hat Gluster Storage 3.2 to Red Hat Gluster Storage 3.2 Async

    For example:

    # cp /var/run/gluster/shared_storage/nfs-ganesha/exports/export.v1.conf backup/
    # cp /var/run/gluster/shared_storage/nfs-ganesha/exports/export.v2.conf backup/
    # cp /var/run/gluster/shared_storage/nfs-ganesha/exports/export.v3.conf backup/
    # cp /var/run/gluster/shared_storage/nfs-ganesha/exports/export.v4.conf backup/
    # cp /var/run/gluster/shared_storage/nfs-ganesha/exports/export.v5.conf backup/
    # cp /var/run/gluster/shared_storage/nfs-ganesha/ganesha.conf backup/
    # cp /var/run/gluster/shared_storage/nfs-ganesha/ganesha-ha.conf backup/
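    In these examples, backup/ is only a placeholder for a directory of your choosing; if it does not already exist, create it on each node before copying, for example:
    # mkdir -p backup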
  2. Disable nfs-ganesha on the cluster by executing the following command:
    # gluster nfs-ganesha disable
    For example:
    # gluster nfs-ganesha disable
    This will take a few minutes to complete. Please wait ..
    nfs-ganesha : success
    
  3. Disable the shared volume in the cluster by executing the following command:
    # gluster volume set all cluster.enable-shared-storage disable
    For example:
    # gluster volume set all cluster.enable-shared-storage disable
    Disabling cluster.enable-shared-storage will delete the shared storage volume (gluster_shared_storage), which is used by snapshot scheduler, geo-replication and NFS-Ganesha.
    Do you still want to continue? (y/n) y
    volume set: success
  4. Stop the glusterd service and kill any running gluster process on all the nodes.
    On Red Hat Enterprise Linux 7:
    # systemctl stop glusterd
    # pkill glusterfs
    # pkill glusterfsd
    On Red Hat Enterprise Linux 6:
    # service glusterd stop
    # pkill glusterfs
    # pkill glusterfsd
  5. On all the nodes, confirm that all gluster processes have stopped by executing the following command. If any gluster processes are still listed, terminate them using kill:
    # pgrep gluster
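    If the command still lists any process IDs, terminate those processes before continuing, for example:
    # kill <gluster_process_pid>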
  6. Stop the pcsd service on all nodes of the cluster.
    On Red Hat Enterprise Linux 7:
    # systemctl stop pcsd
    On Red Hat Enterprise Linux 6:
    # service pcsd stop
  7. Update the packages on all the nodes by executing the following command:
    # yum update
    This updates the required packages and any dependencies of those packages.

    Important

    • From Red Hat Gluster Storage 3.2, NFS-Ganesha packages must be installed on all the nodes of the trusted storage pool.
    • Verify on all the nodes that the required packages are updated, the nodes are fully functional and are using the correct versions. If anything does not seem correct, then do not proceed until the situation is resolved. Contact the Red Hat Global Support Services for assistance if needed.
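    One way to check the installed versions on each node is to query the RPM database; the package-name pattern below is only illustrative:
    # rpm -qa | grep -E 'glusterfs|nfs-ganesha'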
  8. Start the glusterd and pcsd service on all the nodes by executing the following commands.
    On Red Hat Enterprise Linux 7:
    # systemctl start glusterd
    # systemctl start pcsd
    On Red Hat Enterprise Linux 6:
    # service glusterd start
    # service pcsd start
  9. When all nodes have been upgraded, run the following command to update the op-version of the cluster. This helps to prevent any compatibility issues within the cluster.
    # gluster volume set all cluster.op-version 31001
    1. Copy the volume's export information from the backup copy of ganesha.conf to the newly renamed ganesha.conf under /etc/ganesha.
      In the backup copy of ganesha.conf, the export entries look like the following:
      %include "/etc/ganesha/exports/export.v1.conf"
      %include "/etc/ganesha/exports/export.v2.conf"
      %include "/etc/ganesha/exports/export.v3.conf"
      %include "/etc/ganesha/exports/export.v4.conf"
      %include "/etc/ganesha/exports/export.v5.conf"
      
    2. Copy the backed-up volume export files from the backup directory to /etc/ganesha/exports:
      # cp export.* /etc/ganesha/exports/
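    To confirm that the new op-version is in effect, one option is to inspect the glusterd state file on each node (the path shown is the default):
    # grep operating-version /var/lib/glusterd/glusterd.info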
  10. Enable the firewall settings for the new services and ports. Information on how to enable the services is available in the Red Hat Gluster Storage Administration Guide.
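    On Red Hat Enterprise Linux 7 with firewalld, this typically means adding the NFS-related and high-availability services to the active zone. The following is only a sketch; the zone name and the complete list of services are deployment-specific, so follow the Administration Guide:
    # firewall-cmd --zone=public --add-service=nfs --add-service=rpc-bind --add-service=mountd --add-service=high-availability
    # firewall-cmd --zone=public --add-service=nfs --add-service=rpc-bind --add-service=mountd --add-service=high-availability --permanent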
  11. Enable the shared volume in the cluster:
    # gluster volume set all cluster.enable-shared-storage enable
    For example:
    # gluster volume set all cluster.enable-shared-storage enable
    volume set: success
  12. Ensure that the shared storage volume mount exists on the server after a node reboot or shutdown. If it does not, mount the shared storage volume manually using the following command:
    # mount -t glusterfs <local_node's_hostname>:gluster_shared_storage /var/run/gluster/shared_storage
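    You can check whether the mount is already in place, for example:
    # mount | grep gluster_shared_storage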
  13. Once the shared volume is created, create a folder named “nfs-ganesha” inside /var/run/gluster/shared_storage:
    # cd /var/run/gluster/shared_storage/
    # mkdir nfs-ganesha
  14. Copy the ganesha.conf, ganesha-ha.conf, and the exports folder from /etc/ganesha to /var/run/gluster/shared_storage/nfs-ganesha:
    # cd /etc/ganesha/
    # cp ganesha.conf  ganesha-ha.conf /var/run/gluster/shared_storage/nfs-ganesha/
    # cp -r exports/ /var/run/gluster/shared_storage/nfs-ganesha/
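    A quick way to confirm that the files are now in the shared location:
    # ls /var/run/gluster/shared_storage/nfs-ganesha/
    exports  ganesha-ha.conf  ganesha.conf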
  15. If there are any export entries in the ganesha.conf file, update the export paths in the file using the following command:
    # sed -i 's/\/etc\/ganesha/\/var\/run\/gluster\/shared_storage\/nfs-ganesha/' /var/run/gluster/shared_storage/nfs-ganesha/ganesha.conf
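    For example, an export entry that previously read:
    %include "/etc/ganesha/exports/export.v1.conf"
    should read as follows after the substitution:
    %include "/var/run/gluster/shared_storage/nfs-ganesha/exports/export.v1.conf"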
  16. Execute the following command to clean up any existing cluster-related configuration:
    # /usr/libexec/ganesha/ganesha-ha.sh --cleanup /var/run/gluster/shared_storage/nfs-ganesha
  17. If you have upgraded to Red Hat Enterprise Linux 7.4, enable the ganesha_use_fusefs and gluster_use_execmem booleans before enabling NFS-Ganesha by executing the following commands:
    # setsebool -P ganesha_use_fusefs on
    # setsebool -P gluster_use_execmem on
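    You can confirm that both booleans are set with getsebool, for example:
    # getsebool ganesha_use_fusefs gluster_use_execmem
    ganesha_use_fusefs --> on
    gluster_use_execmem --> on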
  18. Enable nfs-ganesha on the cluster:
    # gluster nfs-ganesha enable
    For example:
    # gluster nfs-ganesha enable
    Enabling NFS-Ganesha requires Gluster-NFS to be disabled across the trusted pool. Do you still want to continue?
     (y/n) y
    This will take a few minutes to complete. Please wait ..
    nfs-ganesha : success
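    Once NFS-Ganesha is running, you can optionally confirm that the volumes are exported again, for example:
    # showmount -e localhost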

Important

Verify that all the nodes are functional. If anything does not seem correct, then do not proceed until the situation is resolved. Contact Red Hat Global Support Services for assistance if required.

7.2.3. Migrating from Gluster NFS to NFS Ganesha in Offline mode

The following steps must be performed on each node of the replica pair to migrate from Gluster NFS to NFS Ganesha:
  1. To ensure that CTDB does not start automatically after a reboot, run the following command on each node of the CTDB cluster:
    # chkconfig ctdb off
  2. Stop the CTDB service on the Red Hat Gluster Storage node using the following command on each node of the CTDB cluster:
    # service ctdb stop
  3. To verify that the CTDB and NFS services are stopped, execute the following command; it should return no output if both services have stopped:
    # ps axf | grep -E '(ctdb|nfs)[d]'
  4. Stop the gluster services on the storage server using the following commands:
    # service glusterd stop
    # pkill glusterfs
    # pkill glusterfsd
  5. Delete the CTDB volume by executing the following command:
    # gluster vol delete <ctdb_vol_name>
  6. Update the server using the following command:
    # yum update
  7. Reboot the server.
  8. Start the glusterd service.
    On Red Hat Enterprise Linux 7:
    # systemctl start glusterd
    On Red Hat Enterprise Linux 6:
    # service glusterd start
  9. When all nodes have been upgraded, run the following command to update the op-version of the cluster. This helps to prevent any compatibility issues within the cluster.
    # gluster volume set all cluster.op-version 31001
  10. To install the nfs-ganesha packages, refer to Chapter 4, Deploying NFS-Ganesha on Red Hat Gluster Storage.
  11. To configure the nfs-ganesha cluster, refer to the NFS-Ganesha section in the Red Hat Gluster Storage Administration Guide.