Chapter 5. Upgrading to Red Hat Gluster Storage 3.5

This chapter describes the procedure for upgrading to Red Hat Gluster Storage 3.5 while remaining on the same major version of the Red Hat Enterprise Linux platform. To upgrade from RHEL 6 based Red Hat Gluster Storage to RHEL 7 based Red Hat Gluster Storage, refer to Upgrading Red Hat Gluster Storage

Important

Red Hat Enterprise Linux 8 (RHEL 8) is supported only for new installations of Red Hat Gluster Storage 3.5.2 and above.
Upgrades to RHEL 8 based Red Hat Gluster Storage are supported only from Red Hat Gluster Storage 3.5.2 onwards. Upgrading a RHEL 7 based Red Hat Gluster Storage cluster to a RHEL 8 based Red Hat Gluster Storage cluster is not supported.


5.1. Offline Upgrade to Red Hat Gluster Storage 3.5

Warning

Before you upgrade, be aware of changed requirements that exist after Red Hat Gluster Storage 3.1.3. If you want to access a volume being provided by a Red Hat Gluster Storage 3.1.3 or higher server, your client must also be using Red Hat Gluster Storage 3.1.3 or higher. Accessing volumes from other client versions can result in data becoming unavailable and problems with directory operations. This requirement exists because Red Hat Gluster Storage 3.1.3 contained a number of changes that affect how the Distributed Hash Table works in order to improve directory consistency and remove the effects seen in BZ#1115367 and BZ#1118762.

Important

In Red Hat Enterprise Linux 7 based Red Hat Gluster Storage 3.1 and higher, updating reloads firewall rules. All runtime-only changes made before the reload are lost, so ensure that any changes you want to keep are made persistently.

5.1.1. Upgrading to Red Hat Gluster Storage 3.5 for Systems Subscribed to Red Hat Subscription Manager

Procedure 5.1. Before you upgrade

  1. Back up the following configuration directories and files to a location that is not on the operating system partition.
    • /var/lib/glusterd
    • /etc/samba
    • /etc/ctdb
    • /etc/glusterfs
    • /var/lib/samba
    • /var/lib/ctdb
    • /var/run/gluster/shared_storage/nfs-ganesha

    Note

    With the release of 3.5 Batch Update 3, the mount point of shared storage changed from /var/run/gluster/ to /run/gluster/.
    If you use NFS-Ganesha, back up the following files from all nodes:
    • /run/gluster/shared_storage/nfs-ganesha/exports/export.*.conf
    • /etc/ganesha/ganesha.conf
    • /etc/ganesha/ganesha-ha.conf
    If upgrading from Red Hat Gluster Storage 3.3 to 3.4 or a subsequent release, back up all extended attributes (xattrs) by executing the following command individually on the brick root(s) of all nodes:
    # find ./ -type d ! -path "./.*" ! -path "./" | xargs getfattr -d -m. -e hex > /var/log/glusterfs/xattr_dump_brick_name
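On nodes with several bricks, the dump can be scripted so that each brick gets its own file. This is a minimal sketch: the brick paths are placeholders, and the xattr_dump_file helper is an illustrative name, not part of Red Hat Gluster Storage.

```shell
# Sketch: run the xattr dump once per brick root, one dump file per
# brick. The brick paths below are placeholders for your own layout.
xattr_dump_file() {
    # Flatten a brick path into a unique dump file name, e.g.
    # /bricks/brick1 -> /var/log/glusterfs/xattr_dump_bricks_brick1
    echo "/var/log/glusterfs/xattr_dump$(echo "$1" | tr '/' '_')"
}

for brick in /bricks/brick1 /bricks/brick2; do   # assumed brick roots
    out=$(xattr_dump_file "$brick")
    echo "would dump xattrs of $brick to $out"
    # Uncomment on a real node:
    # (cd "$brick" && find ./ -type d ! -path "./.*" ! -path "./" \
    #     | xargs getfattr -d -m. -e hex > "$out")
done
```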
  2. Unmount gluster volumes from all clients. On a client, use the following command to unmount a volume from a mount point.
    # umount mount-point
  3. If you use NFS-Ganesha, run the following on a gluster server to disable the nfs-ganesha service:
    # gluster nfs-ganesha disable
  4. On a gluster server, disable the shared volume.
    # gluster volume set all cluster.enable-shared-storage disable
  5. Stop all volumes.
    # for vol in `gluster volume list`; do gluster --mode=script volume stop $vol; sleep 2s; done
  6. Verify that all volumes are stopped.
    # gluster volume info
  7. Stop the glusterd services on all servers using the following command:
    # service glusterd stop
    # pkill glusterfs
    # pkill glusterfsd
  8. Stop the pcsd service.
    # systemctl stop pcsd

Procedure 5.2. Upgrade using yum

Note

Verify that your system is not on the legacy Red Hat Network Classic update system.
# migrate-rhs-classic-to-rhsm --status
If you are still on Red Hat Network Classic, run the following command to migrate to Red Hat Subscription Manager.
# migrate-rhs-classic-to-rhsm --rhn-to-rhsm
Then verify that your status has changed.
# migrate-rhs-classic-to-rhsm --status
  1. If you use Samba:
    1. For Red Hat Enterprise Linux 6.7 or higher, enable the following repository:
      # subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-6-server-rpms
      For Red Hat Enterprise Linux 7, enable the following repository:
      # subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-7-server-rpms
      For Red Hat Enterprise Linux 8, enable the following repository:
      # subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-8-x86_64-rpms
    2. Ensure that Samba is upgraded on all the nodes simultaneously, as running different versions of Samba in the same cluster will lead to data corruption.
      Stop the CTDB and SMB services.
      On Red Hat Enterprise Linux 7 and Red Hat Enterprise Linux 8:
      # systemctl stop ctdb
      On Red Hat Enterprise Linux 6:
      # service ctdb stop
      To verify that services are stopped, run:
      # ps axf | grep -E '(ctdb|smb|winbind|nmb)[d]'
  2. Upgrade the server to Red Hat Gluster Storage 3.5.
    # yum update
    Wait for the update to complete.

    Important

    Run the following command to install nfs-ganesha-selinux on Red Hat Enterprise Linux 7:
    # yum install nfs-ganesha-selinux
    Run the following command to install nfs-ganesha-selinux on Red Hat Enterprise Linux 8:
    # dnf install glusterfs-ganesha
  3. If you use Samba/CTDB, update the following files to replace META="all" with META="<ctdb_volume_name>", for example, META="ctdb":
    • /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh - This script ensures the file system and its lock volume are mounted on all Red Hat Gluster Storage servers that use Samba, and ensures that CTDB starts at system boot.
    • /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh - This script ensures that the file system and its lock volume are unmounted when the CTDB volume is stopped.

      Note

      When upgrading RHEL based Red Hat Gluster Storage with Samba to 3.5 Batch Update 4, the write-behind translator must be manually disabled for all existing Samba volumes.
      # gluster volume set <volname> performance.write-behind off
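The META="all" replacement in the two hook scripts above can be done with sed. This is a sketch under assumptions: "ctdb" stands in for your CTDB volume name, and HOOKS is overridable so the loop can be exercised outside a storage node; each hook script is backed up before editing.

```shell
# Sketch: replace META="all" with the CTDB volume name in both hook
# scripts. CTDB_VOL is an assumed volume name; adjust to your setup.
CTDB_VOL=${CTDB_VOL:-ctdb}
HOOKS=${HOOKS:-"/var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh"}
for f in $HOOKS; do
    [ -e "$f" ] || { echo "skipping missing $f"; continue; }
    cp "$f" "$f.bak"                              # keep a backup copy
    sed -i "s/META=\"all\"/META=\"$CTDB_VOL\"/" "$f"
    echo "updated $f"
done
```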
  4. Reboot the server to ensure that kernel updates are applied.
  5. Ensure that glusterd and pcsd services are started.
    # systemctl start glusterd
    # systemctl start pcsd

    Note

    During the upgrade of servers, the glustershd.log file shows “Invalid argument” errors during every index crawl (every 10 minutes by default) on the upgraded nodes. This is expected and can be ignored until the op-version is bumped up, after which these errors are no longer triggered. Sample error message:
    [2021-05-25 17:58:38.007134] E [MSGID: 114031] [client-rpc-fops_v2.c:216:client4_0_mkdir_cbk] 0-spvol-client-40: remote operation failed. Path: (null) [Invalid argument]
    If the cluster is at op-version 70000 or lower, do not bump the op-version up to 70100 or higher until all servers and clients are upgraded to the newer version.
  6. When all nodes have been upgraded, run the following command to update the op-version of the cluster. This helps to prevent any compatibility issues within the cluster.
    # gluster volume set all cluster.op-version 70200

    Note

    70200 is the cluster.op-version value for Red Hat Gluster Storage 3.5. After updating the cluster.op-version, enable granular-entry-heal for each volume with the following command:
    # gluster volume heal $VOLNAME granular-entry-heal enable
    The feature is enabled by default after upgrading to Red Hat Gluster Storage 3.5, but it takes effect only after the op-version is bumped up. Refer to Section 1.5, “Red Hat Gluster Storage Software Components and Versions” for the correct cluster.op-version value for other versions.
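On clusters with many volumes, granular-entry-heal can be enabled in one pass using the same loop style this chapter uses to start and stop volumes. The volume names below are stand-ins for the output of gluster volume list; drop the echo to run the commands for real.

```shell
# Sketch: enable granular-entry-heal on every volume after the
# op-version bump. Stubbed list; really: VOLS=$(gluster volume list)
VOLS="vol1 vol2 vol3"            # assumed volume names
count=0
for VOLNAME in $VOLS; do
    echo gluster volume heal "$VOLNAME" granular-entry-heal enable
    count=$((count + 1))
done
echo "granular-entry-heal would be enabled on $count volumes"
```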

    Important

    If the op-version is bumped up to 70100 after the servers are upgraded but before the clients are upgraded, some internal metadata files under the root of the mount point, named .glusterfs-anonymous-inode-(gfid), are exposed to the older clients. The clients must not perform any I/O in this directory, and must not remove or touch its contents. Once the clients are upgraded to version 3.5.4 or higher, this directory becomes invisible to them.
  7. If you want to migrate from Gluster NFS to NFS Ganesha as part of this upgrade, install the NFS-Ganesha packages as described in Chapter 4, Deploying NFS-Ganesha on Red Hat Gluster Storage, and configure the NFS Ganesha cluster using the information in the NFS Ganesha section of the Red Hat Gluster Storage 3.5 Administration Guide.
  8. Start all volumes.
    # for vol in `gluster volume list`; do gluster --mode=script volume start $vol; sleep 2s; done
  9. If you are using NFS-Ganesha:
    1. Copy the volume's export information from your backup copy of ganesha.conf to the new /etc/ganesha/ganesha.conf file.
      The export information in the backed up file is similar to the following:
      %include "/var/run/gluster/shared_storage/nfs-ganesha/exports/export.v1.conf"
      %include "/var/run/gluster/shared_storage/nfs-ganesha/exports/export.v2.conf"
      %include "/var/run/gluster/shared_storage/nfs-ganesha/exports/export.v3.conf"

      Note

      With the release of 3.5 Batch Update 3, the mount point of shared storage changed from /var/run/gluster/ to /run/gluster/.
    2. Copy the backup volume export files from the backup directory to /etc/ganesha/exports by running the following command from the backup directory:
      # cp export.* /etc/ganesha/exports/
  10. Enable the shared volume.
    # gluster volume set all cluster.enable-shared-storage enable
  11. Ensure that the shared storage volume is mounted on the server. If the volume is not mounted, run the following command:
    # mount -t glusterfs hostname:gluster_shared_storage /var/run/gluster/shared_storage
  12. Ensure that the /var/run/gluster/shared_storage/nfs-ganesha directory is created.
    # cd /var/run/gluster/shared_storage/
    # mkdir nfs-ganesha
  13. Enable firewall settings for new services and ports. See Getting Started in the Red Hat Gluster Storage 3.5 Administration Guide.
  14. If you use Samba/CTDB:
    1. Mount /gluster/lock before starting CTDB by executing the following commands:
      # mount <ctdb_volume_name>
      # mount -t glusterfs server:/ctdb_volume_name /gluster/lock/
    2. Verify that the lock volume is mounted correctly by checking for lock in the output of the mount command on any Samba server.
      # mount | grep 'lock'
      ...
      <hostname>:/<ctdb_volume_name>.tcp on /gluster/lock type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
    3. If all servers that host volumes accessed via SMB have been updated, then start the CTDB and Samba services by executing the following commands.
      On Red Hat Enterprise Linux 7 and Red Hat Enterprise Linux 8:
      # systemctl start ctdb
      On Red Hat Enterprise Linux 6:
      # service ctdb start
    4. To verify that the CTDB and SMB services have started, execute the following command:
      # ps axf | grep -E '(ctdb|smb|winbind|nmb)[d]'
  15. If you use NFS-Ganesha:
    1. Copy the ganesha.conf and ganesha-ha.conf files, and the /etc/ganesha/exports directory to the /var/run/gluster/shared_storage/nfs-ganesha directory.
      # cd /etc/ganesha/
      # cp ganesha.conf  ganesha-ha.conf /var/run/gluster/shared_storage/nfs-ganesha/
      # cp -r exports/ /var/run/gluster/shared_storage/nfs-ganesha/
    2. Update the path of any export entries in the ganesha.conf file.
      # sed -i 's/\/etc\/ganesha/\/var\/run\/gluster\/shared_storage\/nfs-ganesha/' /var/run/gluster/shared_storage/nfs-ganesha/ganesha.conf
    3. Run the following to clean up any existing cluster configuration:
      # /usr/libexec/ganesha/ganesha-ha.sh --cleanup /var/run/gluster/shared_storage/nfs-ganesha
    4. If you have upgraded to Red Hat Enterprise Linux 7.4 or later, set the following SELinux Boolean:
      # setsebool -P ganesha_use_fusefs on
    5. Start the nfs-ganesha service and verify that all nodes are functional.
      # gluster nfs-ganesha enable
    6. Enable NFS-Ganesha on all volumes.
      # gluster volume set volname ganesha.enable on
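After the final step, a quick sanity check confirms the cluster is fully on 3.5. The snippet below parses the usual two-column output of gluster volume get; the line is stubbed here so the parsing can be shown without a live cluster.

```shell
# Sketch: verify the cluster op-version after the upgrade. Stubbed
# output line; on a real node use:
#   line=$(gluster volume get all cluster.op-version | awk 'END{print}')
line="cluster.op-version                      70200"
opver=$(echo "$line" | awk '{print $2}')
if [ "$opver" -ge 70200 ]; then
    echo "cluster.op-version is $opver (Red Hat Gluster Storage 3.5)"
else
    echo "cluster.op-version is $opver; bump it once all nodes and clients are upgraded"
fi
```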

5.1.2. Upgrading to Red Hat Gluster Storage 3.5 for Systems Subscribed to Red Hat Network Satellite Server

Procedure 5.3. Before you upgrade

  1. Back up the following configuration directories and files to a location that is not on the operating system partition.
    • /var/lib/glusterd
    • /etc/samba
    • /etc/ctdb
    • /etc/glusterfs
    • /var/lib/samba
    • /var/lib/ctdb
    • /var/run/gluster/shared_storage/nfs-ganesha

    Note

    With the release of 3.5 Batch Update 3, the mount point of shared storage changed from /var/run/gluster/ to /run/gluster/.
    If you use NFS-Ganesha, back up the following files from all nodes:
    • /run/gluster/shared_storage/nfs-ganesha/exports/export.*.conf
    • /etc/ganesha/ganesha.conf
    • /etc/ganesha/ganesha-ha.conf
    If upgrading from Red Hat Gluster Storage 3.3 to 3.4 or a subsequent release, back up all extended attributes (xattrs) by executing the following command individually on the brick root(s) of all nodes:
    # find ./ -type d ! -path "./.*" ! -path "./" | xargs getfattr -d -m. -e hex > /var/log/glusterfs/xattr_dump_brick_name
  2. Unmount gluster volumes from all clients. On a client, use the following command to unmount a volume from a mount point.
    # umount mount-point
  3. If you use NFS-Ganesha, run the following on a gluster server to disable the nfs-ganesha service:
    # gluster nfs-ganesha disable
  4. On a gluster server, disable the shared volume.
    # gluster volume set all cluster.enable-shared-storage disable
  5. Stop all volumes.
    # for vol in `gluster volume list`; do gluster --mode=script volume stop $vol; sleep 2s; done
  6. Verify that all volumes are stopped.
    # gluster volume info
  7. Stop the glusterd services on all servers using the following command:
    # service glusterd stop
    # pkill glusterfs
    # pkill glusterfsd
  8. Stop the pcsd service.
    # systemctl stop pcsd

Procedure 5.4. Upgrade using Satellite

  1. Create an Activation Key at the Red Hat Network Satellite Server, and associate it with the following channels. For more information, see Section 2.6, “Installing from Red Hat Satellite Server”
    • For Red Hat Enterprise Linux 6.7 or higher:
      Base Channel: Red Hat Enterprise Linux Server (v.6 for 64-bit x86_64)
      
      Child channels:
      RHEL Server Scalable File System (v. 6 for x86_64)
      Red Hat Gluster Storage Server 3 (RHEL 6 for x86_64)
      If you use Samba, add the following channel:
      Red Hat Gluster 3 Samba (RHEL 6 for x86_64)
    • For Red Hat Enterprise Linux 7:
      Base Channel: Red Hat Enterprise Linux Server (v.7 for 64-bit x86_64)
      
      Child channels:
      RHEL Server Scalable File System (v. 7 for x86_64)
      Red Hat Gluster Storage Server 3 (RHEL 7 for x86_64)
      If you use Samba, add the following channel:
      Red Hat Gluster 3 Samba (RHEL 7 for x86_64)
  2. Unregister your system from Red Hat Network Satellite by following these steps:
    1. Log in to the Red Hat Network Satellite server.
    2. Click on the Systems tab in the top navigation bar and then the name of the old or duplicated system in the System List.
    3. Click the delete system link in the top-right corner of the page.
    4. Confirm the system profile deletion by clicking the Delete System button.
  3. Run the following command on your Red Hat Gluster Storage server, using your credentials and the Activation Key you prepared earlier. This re-registers the system to the Red Hat Gluster Storage 3.5 channels on the Red Hat Network Satellite Server.
    # rhnreg_ks --username username --password password --force --activationkey Activation Key ID
  4. Verify that the channel subscriptions have been updated.
    On Red Hat Enterprise Linux 6.7 and higher, look for the following channels, as well as the rh-gluster-3-samba-for-rhel-6-server-rpms channel if you use Samba.
    # rhn-channel --list
    rhel-6-server-rpms
    rhel-scalefs-for-rhel-6-server-rpms
    rhs-3-for-rhel-6-server-rpms
    On Red Hat Enterprise Linux 7, look for the following channels, as well as the rh-gluster-3-samba-for-rhel-7-server-rpms channel if you use Samba.
    # rhn-channel --list
    rhel-7-server-rpms
    rh-gluster-3-for-rhel-7-server-rpms
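The channel check can be scripted so that a missing subscription is caught before yum update runs. The channel list is stubbed below; on a real node it would come from rhn-channel --list, and the required names match the RHEL 7 example above.

```shell
# Sketch: confirm that the required RHEL 7 channels are subscribed.
# Stubbed list; on a real node: channels=$(rhn-channel --list)
channels="rhel-7-server-rpms
rh-gluster-3-for-rhel-7-server-rpms"
missing=0
for c in rhel-7-server-rpms rh-gluster-3-for-rhel-7-server-rpms; do
    if echo "$channels" | grep -qx "$c"; then
        echo "subscribed: $c"
    else
        echo "MISSING:    $c"
        missing=1
    fi
done
[ "$missing" -eq 0 ] && echo "all required channels present"
```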
  5. Upgrade to Red Hat Gluster Storage 3.5.
    # yum update

    Important

    Run the following command to install nfs-ganesha-selinux on Red Hat Enterprise Linux 7:
    # yum install nfs-ganesha-selinux
    Run the following command to install nfs-ganesha-selinux on Red Hat Enterprise Linux 8:
    # dnf install glusterfs-ganesha
  6. Reboot the server and run volume and data integrity checks.
  7. When all nodes have been upgraded, run the following command to update the op-version of the cluster. This helps to prevent any compatibility issues within the cluster.
    # gluster volume set all cluster.op-version 70200

    Note

    70200 is the cluster.op-version value for Red Hat Gluster Storage 3.5. After updating the cluster.op-version, enable granular-entry-heal for each volume with the following command:
    # gluster volume heal $VOLNAME granular-entry-heal enable
    The feature is enabled by default after upgrading to Red Hat Gluster Storage 3.5, but it takes effect only after the op-version is bumped up. Refer to Section 1.5, “Red Hat Gluster Storage Software Components and Versions” for the correct cluster.op-version value for other versions.
  8. Start all volumes.
    # for vol in `gluster volume list`; do gluster --mode=script volume start $vol; sleep 2s; done
  9. If you are using NFS-Ganesha:
    1. Copy the volume's export information from your backup copy of ganesha.conf to the new /etc/ganesha/ganesha.conf file.
      The export information in the backed up file is similar to the following:
      %include "/var/run/gluster/shared_storage/nfs-ganesha/exports/export.v1.conf"
      %include "/var/run/gluster/shared_storage/nfs-ganesha/exports/export.v2.conf"
      %include "/var/run/gluster/shared_storage/nfs-ganesha/exports/export.v3.conf"

      Note

      With the release of 3.5 Batch Update 3, the mount point of shared storage changed from /var/run/gluster/ to /run/gluster/.
    2. Copy the backup volume export files from the backup directory to /etc/ganesha/exports by running the following command from the backup directory:
      # cp export.* /etc/ganesha/exports/
  10. Enable the shared volume.
    # gluster volume set all cluster.enable-shared-storage enable
  11. Ensure that the shared storage volume is mounted on the server. If the volume is not mounted, run the following command:
    # mount -t glusterfs hostname:gluster_shared_storage /var/run/gluster/shared_storage
  12. Ensure that the /var/run/gluster/shared_storage/nfs-ganesha directory is created.
    # cd /var/run/gluster/shared_storage/
    # mkdir nfs-ganesha
  13. Enable firewall settings for new services and ports. See Getting Started in the Red Hat Gluster Storage 3.5 Administration Guide.
  14. If you use Samba/CTDB:
    1. Mount /gluster/lock before starting CTDB by executing the following commands:
      # mount <ctdb_volume_name>
      # mount -t glusterfs server:/ctdb_volume_name /gluster/lock/
    2. Verify that the lock volume is mounted correctly by checking for lock in the output of the mount command on any Samba server.
      # mount | grep 'lock'
      ...
      <hostname>:/<ctdb_volume_name>.tcp on /gluster/lock type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
    3. If all servers that host volumes accessed via SMB have been updated, then start the CTDB and Samba services by executing the following commands.
      On Red Hat Enterprise Linux 7 and Red Hat Enterprise Linux 8:
      # systemctl start ctdb
      On Red Hat Enterprise Linux 6:
      # service ctdb start
    4. To verify that the CTDB and SMB services have started, execute the following command:
      # ps axf | grep -E '(ctdb|smb|winbind|nmb)[d]'
  15. If you use NFS-Ganesha:
    1. Copy the ganesha.conf and ganesha-ha.conf files, and the /etc/ganesha/exports directory to the /var/run/gluster/shared_storage/nfs-ganesha directory.
      # cd /etc/ganesha/
      # cp ganesha.conf  ganesha-ha.conf /var/run/gluster/shared_storage/nfs-ganesha/
      # cp -r exports/ /var/run/gluster/shared_storage/nfs-ganesha/
    2. Update the path of any export entries in the ganesha.conf file.
      # sed -i 's/\/etc\/ganesha/\/var\/run\/gluster\/shared_storage\/nfs-ganesha/' /var/run/gluster/shared_storage/nfs-ganesha/ganesha.conf
    3. Run the following to clean up any existing cluster configuration:
      # /usr/libexec/ganesha/ganesha-ha.sh --cleanup /var/run/gluster/shared_storage/nfs-ganesha
    4. If you have upgraded to Red Hat Enterprise Linux 7.4 or later, set the following SELinux Boolean:
      # setsebool -P ganesha_use_fusefs on
  16. Start the ctdb service (and nfs-ganesha service, if used) and verify that all nodes are functional.
    # systemctl start ctdb
    # gluster nfs-ganesha enable
  17. If this deployment uses NFS-Ganesha, enable NFS-Ganesha on all volumes.
    # gluster volume set volname ganesha.enable on

5.1.3. Special consideration for Offline Software Upgrade

5.1.3.1. Migrate CTDB configuration files

If you are upgrading to CTDB 4.9.x or later from an older version of CTDB, perform the following steps after the upgrade:
  1. Make a temporary directory to migrate configuration files.
    # mkdir /tmp/ctdb-migration
  2. Run the CTDB configuration migration script.
    # /usr/share/doc/ctdb-4.9.x/examples/config_migrate.sh -o /tmp/ctdb-migration /etc/sysconfig/ctdb
    The script assumes that the CTDB configuration directory is /etc/ctdb. If this is not correct for your setup, specify an alternative configuration directory with the -d option, for example:
    # /usr/share/doc/ctdb-4.9.x/examples/config_migrate.sh -o /tmp/ctdb-migration /etc/sysconfig/ctdb -d ctdb-config-dir
  3. Verify that the /tmp/ctdb-migration directory now contains the following files:
    • commands.sh
    • ctdb.conf
    • script.options
    • ctdb.tunables (if additional changes are required)
    • ctdb.sysconfig (if additional changes are required)
    • README.warn (if additional changes are required)
  4. Back up the current configuration files.
    # mv /etc/ctdb/ctdb.conf /etc/ctdb/ctdb.conf.default
  5. Install the new configuration files.
    # mv /tmp/ctdb-migration/ctdb.conf /etc/ctdb/ctdb.conf
    # mv /tmp/ctdb-migration/script.options /etc/ctdb/
  6. Make the commands.sh file executable, and run it.
    # chmod +x /tmp/ctdb-migration/commands.sh
    # /tmp/ctdb-migration/commands.sh
  7. If /tmp/ctdb-migration/ctdb.tunables exists, move it to the /etc/ctdb directory.
    # cp /tmp/ctdb-migration/ctdb.tunables /etc/ctdb
  8. If /tmp/ctdb-migration/ctdb.sysconfig exists, back up the old /etc/sysconfig/ctdb file and replace it with /tmp/ctdb-migration/ctdb.sysconfig.
    # mv /etc/sysconfig/ctdb /etc/sysconfig/ctdb.old
    # mv /tmp/ctdb-migration/ctdb.sysconfig /etc/sysconfig/ctdb
    Otherwise, back up the old /etc/sysconfig/ctdb file and replace it with /etc/sysconfig/ctdb.rpmnew.
    # mv /etc/sysconfig/ctdb /etc/sysconfig/ctdb.old
    # mv /etc/sysconfig/ctdb.rpmnew /etc/sysconfig/ctdb
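Once the migrated files are in place, a short check confirms that nothing was left behind. CTDB_DIR defaults to /etc/ctdb but is overridable so the check can be exercised anywhere; the file list matches the files installed in the steps above.

```shell
# Sketch: confirm the migrated CTDB configuration files are installed.
CTDB_DIR=${CTDB_DIR:-/etc/ctdb}
status=0
for f in ctdb.conf script.options; do
    if [ -e "$CTDB_DIR/$f" ]; then
        echo "present: $CTDB_DIR/$f"
    else
        echo "missing: $CTDB_DIR/$f"
        status=1
    fi
done
echo "check finished with status $status (0 = all files present)"
```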