7.3. In-service Software Update from Red Hat Gluster Storage

Warning

Before you update, be aware of the client compatibility requirement introduced in Red Hat Gluster Storage 3.1.3. To access a volume served by a Red Hat Gluster Storage 3.1.3 or higher server, your client must also run Red Hat Gluster Storage 3.1.3 or higher. Accessing such volumes from older client versions can result in data becoming unavailable and in problems with directory operations. This requirement exists because Red Hat Gluster Storage 3.1.3 changed how the Distributed Hash Table works in order to improve directory consistency and address the issues described in BZ#1115367 and BZ#1118762.
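
You can check which Red Hat Gluster Storage client packages are installed on a client machine before you update, for example (the package names below assume the native FUSE client; adjust them for your client type):
# rpm -q glusterfs glusterfs-fuse
# glusterfs --version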

Important

In Red Hat Enterprise Linux 7 based Red Hat Gluster Storage, updating to 3.1 or higher reloads firewall rules. All runtime-only changes made before the reload are lost.
Before you update, be aware of the following:
  • You must update all Red Hat Gluster Storage servers before updating any clients.
  • If geo-replication is in use, slave nodes must be updated before master nodes.
  • NFS-Ganesha does not support in-service updates. All running services and I/O operations must be stopped before starting the update process. For more information, see Section 7.2.2, “Updating NFS-Ganesha in the Offline Mode”.
  • Dispersed volumes (volumes that use erasure coding) do not support in-service updates and cannot be updated in a non-disruptive manner.
  • The SMB and CTDB services do not support in-service updates. The procedure outlined in this section involves service interruptions to the SMB and CTDB services.
  • If updating Samba, ensure that Samba is upgraded on all nodes simultaneously, as running different versions of Samba in the same cluster can result in data corruption.
  • Your system must be registered to Red Hat Network; a quick way to verify this is shown after this list. For more information, refer to Section 2.6, “Subscribing to the Red Hat Gluster Storage Server Channels”.
  • Do not perform any volume operations while the cluster is being updated.
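You can confirm that a node is registered and that its subscriptions are current with the following command (shown here as a quick sanity check; the exact output depends on your subscription configuration):
# subscription-manager status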
To update your system to Red Hat Gluster Storage 3.2.x, perform the following steps on each node of the replica pair.

Updating Red Hat Gluster Storage 3.2 in in-service mode

  1. If you have a replicated configuration, perform these steps on all nodes of a replica set.
    If you have a distributed-replicated setup, perform these steps on one replica set at a time, for all replica sets.
  2. Stop any geo-replication sessions.
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL stop
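    You can verify that the session has stopped by checking its status, for example (the session should be reported as Stopped):
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status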
  3. Verify that there are no pending self-heals:
    # gluster volume heal VOLNAME info
    Wait for any self-heal operations to complete before continuing.
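    For example, you can filter the heal information down to the per-brick entry counts (the output line shown is indicative); every brick should report zero entries before you continue:
    # gluster volume heal VOLNAME info | grep 'Number of entries'
    Number of entries: 0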
  4. Stop the gluster services on the storage server using the following commands:
    # service glusterd stop
    # pkill glusterfs
    # pkill glusterfsd
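    Optionally, confirm that no gluster processes are still running before you proceed, for example:
    # pgrep -l gluster
    If all services and brick processes stopped cleanly, this command returns no output.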
  5. If you use Samba:
    1. Enable the required repository.
      On Red Hat Enterprise Linux 6.7 or later:
      # subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-6-server-rpms
      On Red Hat Enterprise Linux 7:
      # subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-7-server-rpms
    2. Stop the CTDB and SMB services across all nodes in the Samba cluster using the following command. Stopping the CTDB service also stops the SMB service.
      # service ctdb stop
      This ensures different versions of Samba do not run in the same Samba cluster.
    3. Verify that the CTDB and SMB services are stopped by running the following command:
      # ps axf | grep -E '(ctdb|smb|winbind|nmb)[d]'
  6. Update the server using the following command:
    # yum update
    Wait for the update to complete.
  7. If a kernel update was included as part of the update process in the previous step, reboot the server.
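    If you are unsure whether the update installed a new kernel, you can compare the running kernel with the most recently installed kernel package, for example:
    # uname -r
    # rpm -q --last kernel | head -n 1
    If the versions differ, reboot the server before continuing.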
  8. If a reboot of the server was not required, then start the gluster services on the storage server using the following command:
    # service glusterd start
    Additionally, if you use Samba:
    1. Mount /gluster/lock before starting CTDB by executing the following command:
      # mount -a
    2. If the CTDB and SMB services were stopped earlier, then start the services by executing the following command.
      # service ctdb start
    3. To verify if the CTDB and SMB services have started, execute the following command:
      # ps axf | grep -E '(ctdb|smb|winbind|nmb)[d]'
  9. To verify that you have updated to the latest version of the Red Hat Gluster Storage server, execute the following command and compare the output with the desired version in Section 1.5, “Supported Versions of Red Hat Gluster Storage”.
    # gluster --version
  10. Ensure that all bricks are online. To check the status, execute the following command:
    # gluster volume status
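    Each brick should be listed with Y in the Online column. For a per-brick summary of a single volume, you can also filter the detailed status output, for example (assuming the detail output of your version includes an Online field):
    # gluster volume status VOLNAME detail | grep -i online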
  11. Start self-heal on the volume.
    # gluster volume heal VOLNAME
  12. Ensure self-heal is complete on the replica using the following command:
    # gluster volume heal VOLNAME info
  13. When all nodes in the volume have been upgraded, run the following command to update the op-version of the cluster. This helps to prevent any compatibility issues within the cluster.
    # gluster volume set all cluster.op-version 31001

    Note

    31001 is the cluster.op-version value for Red Hat Gluster Storage 3.2.0. Refer to Section 1.5, “Supported Versions of Red Hat Gluster Storage” for the correct cluster.op-version value for other versions.
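    To check the op-version a node is currently operating at, you can inspect the glusterd information file (path shown as commonly found on Red Hat Gluster Storage; verify it on your system):
    # grep operating-version /var/lib/glusterd/glusterd.info
    operating-version=31001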
  14. If you had a meta volume configured prior to this upgrade, and you did not reboot as part of the upgrade process, mount the meta volume:
    # mount /var/run/gluster/shared_storage/
    If this command does not work, review the content of /etc/fstab and ensure that the entry for the shared storage is configured correctly and re-run the mount command. The line for the meta volume in the /etc/fstab file should look like the following:
    hostname:/gluster_shared_storage   /var/run/gluster/shared_storage/   glusterfs   defaults   0 0
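    You can confirm that the meta volume is mounted, for example:
    # grep shared_storage /proc/mounts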
  15. If you use geo-replication, restart the geo-replication sessions when the update is complete.
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start

    Note

    As a result of BZ#1347625, you may need to use the force parameter to successfully restart in some circumstances.
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start force
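    You can then verify that the sessions are running again by checking their status, for example:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status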