7.2. Updating Red Hat Gluster Storage in the Offline Mode

Important

  • Offline updates result in server downtime, as volumes are offline during the update process.
  • Complete updates to all Red Hat Gluster Storage servers before updating any clients.

Important

The Web Administration update sequence outlined below is part of the overall Red Hat Gluster Storage offline update. Updating Web Administration as a standalone component is not supported.

Red Hat Gluster Storage Web Administration Update

If you use Red Hat Gluster Storage Web Administration and intend to update to the latest version, see the Red Hat Gluster Storage Web Administration 3.4.x to 3.4.y section of the Red Hat Gluster Storage Web Administration Quick Start Guide for detailed steps. Follow this sequence to update your Red Hat Gluster Storage system along with the Web Administration environment:
  1. In the Red Hat Gluster Storage Web Administration 3.4.x to 3.4.y section, stop all Web Administration services on the Red Hat Gluster Storage servers as described in step 1 under the heading On Red Hat Gluster Storage Servers (Part 1).
  2. Return to this section, 7.2. Updating Red Hat Gluster Storage in the Offline Mode, and perform the steps outlined under the heading Updating Red Hat Gluster Storage 3.4 in the offline mode.
  3. Navigate back to the Red Hat Gluster Storage Web Administration 3.4.x to 3.4.y section and perform the steps under On Web Administration Server and On Red Hat Gluster Storage Servers (Part II) to complete the Red Hat Gluster Storage and Web Administration update process.

Updating Red Hat Gluster Storage 3.4 in the offline mode

  1. Ensure that you have a working backup, as described in Section 7.1, “Before you update”.
  2. Stop all volumes.
    # for vol in `gluster volume list`; do gluster --mode=script volume stop $vol; sleep 2s; done
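After the loop completes, you can optionally confirm that no volume is still running. A minimal sketch, assuming `gluster volume info` prints a `Status: Started` line for each running volume (the helper name is illustrative, not part of the gluster CLI):

```shell
# Illustrative helper: count volumes that `gluster volume info` still
# reports as Started; expect 0 once all volumes are stopped.
count_started() {
    grep -c '^Status: Started'
}
# On a live server: gluster volume info | count_started
```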
  3. Run the following commands on one server at a time.
    1. Stop all gluster services.
      On Red Hat Enterprise Linux 7:
      # systemctl stop glusterd
      # pkill glusterfs
      # pkill glusterfsd
      On Red Hat Enterprise Linux 6:
      # service glusterd stop
      # pkill glusterfs
      # pkill glusterfsd

      Important

      If glusterd crashes at this point, there is no functional impact because the crash occurs during shutdown. For more information, see Resolving glusterd crash.
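Before moving to the next substep, you can verify that the gluster daemons are really gone. A minimal sketch using standard `pgrep` semantics (the function name is illustrative):

```shell
# Illustrative check: succeeds only when neither glusterd nor any
# glusterfs/glusterfsd process is still running on this server.
gluster_procs_stopped() {
    ! pgrep -x glusterd >/dev/null && ! pgrep glusterfs >/dev/null
}
# On a live server: gluster_procs_stopped && echo "safe to continue"
```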
    2. If you want to migrate from Gluster NFS to NFS Ganesha as part of this update, perform the following additional steps.
      1. Stop and disable CTDB. This ensures that multiple versions of Samba do not run in the cluster during the update process, and avoids data corruption.
        # systemctl stop ctdb
        # systemctl disable ctdb
      2. Verify that the CTDB and NFS services are stopped:
        # ps axf | grep -E '(ctdb|nfs)[d]'
      3. Delete the CTDB volume by executing the following command:
        # gluster volume delete <ctdb_vol_name>
    3. Update the system.
      # yum update
      Review the packages to be updated, and enter y to proceed with the update when prompted.
      Wait for the update to complete.
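If you want to confirm that the update actually pulled in a newer glusterfs build, you can compare RPM versions. A minimal sketch using GNU `sort -V` (the helper name and the example version string are assumptions for illustration, not values from this guide):

```shell
# Illustrative helper: succeeds when version $1 is greater than or
# equal to version $2, using GNU sort's version ordering.
version_ge() {
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}
# On a live server, for example:
#   version_ge "$(rpm -q --qf '%{VERSION}' glusterfs-server)" "3.12.2"
```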
    4. If updates to the kernel package occurred, or if you are migrating from Gluster NFS to NFS Ganesha as part of this update, reboot the system.
    5. Start glusterd.
      On Red Hat Enterprise Linux 7:
      # systemctl start glusterd
      On Red Hat Enterprise Linux 6:
      # service glusterd start
  4. When all servers have been updated, run the following command to update the cluster operating version. This helps to prevent any compatibility issues within the cluster.
    # gluster volume set all cluster.op-version 31305

    Note

    31305 is the cluster.op-version value for Red Hat Gluster Storage 3.4 Batch 3 Update. Refer to Section 1.5, “Supported Versions of Red Hat Gluster Storage” for the correct cluster.op-version value for other versions.
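You can read the value back to confirm the change took effect with `gluster volume get all cluster.op-version`. A minimal parsing sketch, assuming the value appears in the last column of the matching output line (the helper name is illustrative):

```shell
# Illustrative helper: pull the op-version value out of
# `gluster volume get all cluster.op-version` output.
get_op_version() {
    awk '/cluster\.op-version/ {print $NF}'
}
# On a live server: gluster volume get all cluster.op-version | get_op_version
```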
  5. If you want to migrate from Gluster NFS to NFS Ganesha as part of this update, install the NFS-Ganesha packages as described in Chapter 4, Deploying NFS-Ganesha on Red Hat Gluster Storage, and use the information in the NFS Ganesha section of the Red Hat Gluster Storage 3.4 Administration Guide to configure the NFS Ganesha cluster.
  6. Start all volumes.
    # for vol in `gluster volume list`; do gluster --mode=script volume start $vol; sleep 2s; done
  7. If you did not reboot as part of the update process, run the following command to remount the meta volume:
    # mount /var/run/gluster/shared_storage/
    If this command does not work, review the content of /etc/fstab and ensure that the entry for the shared storage is configured correctly and re-run the mount command. The line for the meta volume in the /etc/fstab file should look like the following:
    hostname:/gluster_shared_storage   /var/run/gluster/shared_storage/   glusterfs   defaults   0 0
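To script the check described above, you can grep /etc/fstab for the shared storage mount point before re-running mount. A minimal sketch (the function name is illustrative; the path comes from this guide):

```shell
# Illustrative check: succeeds when the given fstab file contains an
# entry for the gluster shared storage mount point.
has_shared_storage_entry() {
    grep -q '/var/run/gluster/shared_storage' "$1"
}
# On a live server:
#   has_shared_storage_entry /etc/fstab && mount /var/run/gluster/shared_storage/
```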
  8. If you use Gluster NFS to access volumes, enable Gluster NFS using the following command:
    # gluster volume set volname nfs.disable off
    For example:
    # gluster volume set testvol nfs.disable off
    volume set: success
  9. If you use geo-replication, restart the geo-replication sessions when the update is complete.
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start
    You may need to append the force parameter to successfully restart in some circumstances. See BZ#1347625 for details.

Note

If you are updating your Web Administration environment, after executing step 9, navigate to the Red Hat Gluster Storage Web Administration 3.4.x to 3.4.y section and perform the steps identified under On Web Administration Server and On Red Hat Gluster Storage Servers (Part II) to complete the Red Hat Gluster Storage and Web Administration update process.