Chapter 7. Updating Red Hat Gluster Storage from 3.2.x to 3.2.y
- Updating from Red Hat Enterprise Linux 6 based Red Hat Gluster Storage to Red Hat Enterprise Linux 7 based Red Hat Gluster Storage is not supported.
- Asynchronous errata update releases of Red Hat Gluster Storage include all fixes that were released asynchronously since the last release as a cumulative update.
- When there are a large number of snapshots, deactivate the snapshots before performing an update. You can reactivate the snapshots after the update is complete. For more information, see Chapter 4.1 Starting and Stopping the glusterd service in the Red Hat Gluster Storage 3 Administration Guide.
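One possible sequence for deactivating a snapshot before the update and reactivating it afterwards, assuming a hypothetical snapshot named snap1:
# gluster snapshot list volname
# gluster snapshot deactivate snap1
# gluster snapshot activate snap1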
7.1. Updating Red Hat Gluster Storage in the Offline Mode
- Offline updates result in server downtime, as volumes are offline during upgrade.
- You must update all Red Hat Gluster Storage servers before updating any clients.
- This process assumes that you are updating to a thinly provisioned volume.
Updating Red Hat Gluster Storage 3.2 in the offline mode
- Make a complete backup using a reliable backup solution. This Knowledge Base solution covers one possible approach: https://access.redhat.com/solutions/1484053. If you use an alternative backup solution:
- Ensure that you have sufficient space available for a complete backup.
- Copy the .glusterfs directory before copying any data files.
- Ensure that no new files are created on Red Hat Gluster Storage file systems during the backup.
- Ensure that all extended attributes, ACLs, owners, groups, and symbolic and hard links are backed up.
- Check that the backup restores correctly before you continue with the migration.
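As one illustrative approach only, an rsync invocation that preserves extended attributes, ACLs, owners, groups, and hard links might look like the following, assuming a hypothetical brick path /rhgs/brick1 and backup target /backup/brick1. The .glusterfs directory is copied first, as noted above:
# rsync -aAXH /rhgs/brick1/.glusterfs /backup/brick1/
# rsync -aAXH /rhgs/brick1/ /backup/brick1/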
- Delete the existing Logical Volume (LV) and create a new thinly provisioned LV. For more information, see https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/thinprovisioned_volumes.html
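A minimal sketch of creating a thin pool and a thinly provisioned LV, assuming a hypothetical volume group rhgs_vg and example sizes; see the linked documentation for the options appropriate to your environment:
# lvcreate -L 1T -T rhgs_vg/rhgs_pool
# lvcreate -V 900G -T rhgs_vg/rhgs_pool -n rhgs_lv
# mkfs.xfs -i size=512 /dev/rhgs_vg/rhgs_lv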
- Restore backed up content to the newly created thinly provisioned LV.
- When you are certain that your backup works, stop all volumes.
# gluster volume stop volname
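To stop every volume without answering the confirmation prompt for each, a loop such as the following can be used (--mode=script suppresses the prompts):
# for vol in $(gluster volume list); do gluster --mode=script volume stop $vol; done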
- Run the following commands to stop gluster services and update Red Hat Gluster Storage in the offline mode.
On Red Hat Enterprise Linux 6:
# service glusterd stop
# pkill glusterfs
# pkill glusterfsd
# yum update
On Red Hat Enterprise Linux 7:
# systemctl stop glusterd
# pkill glusterfs
# pkill glusterfsd
# yum update
Wait for the update to complete.
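To confirm that the update completed on a node, you can check the installed package version, for example:
# rpm -q glusterfs-server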
- Start glusterd.
On Red Hat Enterprise Linux 6:
# service glusterd start
On Red Hat Enterprise Linux 7:
# systemctl start glusterd
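To verify that glusterd is running and the node has rejoined the trusted storage pool, you can check the peer status:
# gluster peer status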
- When all nodes have been updated, run the following command to update the op-version of the cluster. This helps to prevent any compatibility issues within the cluster.
# gluster volume set all cluster.op-version 31001
Note
31001 is the cluster.op-version value for Red Hat Gluster Storage 3.2.0. Refer to Section 1.5, “Supported Versions of Red Hat Gluster Storage” for the correct cluster.op-version value for other versions.
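One way to check the current operating version on a node is to inspect the glusterd.info file:
# grep operating-version /var/lib/glusterd/glusterd.info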
- Start your volumes with the following command:
# gluster volume start volname
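To confirm that a volume and its brick processes are back online, you can check its status, for example:
# gluster volume status volname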
- If you had a meta volume configured prior to this upgrade, and you did not reboot as part of the upgrade process, mount the meta volume:
# mount /var/run/gluster/shared_storage/
If this command does not work, review the content of /etc/fstab and ensure that the entry for the shared storage is configured correctly, then re-run the mount command. The line for the meta volume in the /etc/fstab file should look like the following:
hostname:/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults 0 0
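To verify that the meta volume is mounted, a check such as the following can be used:
# df -h /var/run/gluster/shared_storage/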
- If using NFS to access volumes, enable gluster-NFS using the following command:
# gluster volume set volname nfs.disable off
For example:
# gluster volume set testvol nfs.disable off
volume set: success
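To verify that the volume is now exported over gluster-NFS, you can list the exports on the server, assuming the NFS client utilities are installed:
# showmount -e localhost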
- If you use geo-replication, restart geo-replication sessions when the upgrade is complete.
Important
In Red Hat Gluster Storage 3.1 and higher, a meta volume is recommended when geo-replication is configured. However, when upgrading geo-replicated Red Hat Gluster Storage from version 3.0.x to 3.1.y, the older geo-replication configuration that did not use shared volumes is persisted to the upgraded installation. Red Hat recommends reconfiguring geo-replication after upgrading to Red Hat Gluster Storage 3.2 to ensure that shared volumes are used and a meta volume is configured.
To enable shared volumes, set cluster.enable-shared-storage to enable from the master node:
# gluster volume set all cluster.enable-shared-storage enable
Then configure geo-replication to use shared volumes as a meta volume by setting use_meta_volume to true:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config use_meta_volume true
For further information, see the Red Hat Gluster Storage 3.2 Administration Guide.
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start
Note
As a result of BZ#1347625, you may need to use the force parameter to successfully restart in some circumstances.
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start force
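After restarting, you can check the state of the session with the status command:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status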