7.2. Updating Red Hat Gluster Storage in the Offline Mode
- Offline updates result in server downtime, as volumes are offline during the update process.
- Complete updates to all Red Hat Gluster Storage servers before updating any clients.
Red Hat Gluster Storage Web Administration Update
- Visit the section Red Hat Gluster Storage Web Administration 3.4.x to 3.4.y and stop all Web Administration services on the Red Hat Gluster Storage servers, as outlined in step 1 under the heading On Red Hat Gluster Storage Servers (Part I).
- Return to this section 7.2. Updating Red Hat Gluster Storage in the Offline Mode and execute the steps outlined under the heading Updating Red Hat Gluster Storage 3.4 in the offline mode.
- Navigate back to the Red Hat Gluster Storage Web Administration 3.4.x to 3.4.y section and perform the steps identified under On Web Administration Server and On Red Hat Gluster Storage Servers (Part II) to complete the Red Hat Gluster Storage and Web Administration update process.
Updating Red Hat Gluster Storage 3.4 in the offline mode
- Ensure that you have a working backup, as described in Section 7.1, “Before you update”.
- Stop all volumes.
for vol in `gluster volume list`; do gluster --mode=script volume stop $vol; sleep 2s; done
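As a hedged sanity check (the parsing below assumes the usual `Status:` line in `gluster volume info` output), you can confirm that every volume reports Stopped before continuing:

```shell
# List each volume with its current status; all should read "Stopped".
# Assumes "gluster volume info" prints a "Status: <state>" line per volume.
for vol in $(gluster volume list); do
    state=$(gluster volume info "$vol" | awk -F': ' '/^Status/ {print $2}')
    echo "$vol: $state"
done
```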
- Run the following commands on one server at a time.
- Stop all gluster services.
On Red Hat Enterprise Linux 7:
# systemctl stop glusterd
# pkill glusterfs
# pkill glusterfsd
On Red Hat Enterprise Linux 6:
# service glusterd stop
# pkill glusterfs
# pkill glusterfsd
Note: If glusterd crashes, there is no functional impact, because the crash occurs during the shutdown. For more information, see Resolving
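Before updating packages, it is worth confirming that no gluster processes survived the shutdown. A minimal check, assuming `pgrep` from procps is available:

```shell
# Report whether any gluster daemon or brick process is still running.
if pgrep -x 'glusterd|glusterfs|glusterfsd' > /dev/null; then
    echo "gluster processes are still running; stop them before updating"
else
    echo "no gluster processes running"
fi
```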
- If you want to migrate from Gluster NFS to NFS Ganesha as part of this update, perform the following additional steps.
- Stop and disable CTDB. This ensures that multiple versions of Samba do not run in the cluster during the update process, and avoids data corruption.
# systemctl stop ctdb
# systemctl disable ctdb
- Verify that the CTDB and NFS services are stopped:
ps axf | grep -E '(ctdb|nfs)[d]'
- Delete the CTDB volume by executing the following command:
# gluster vol delete <ctdb_vol_name>
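You can confirm the deletion took effect by checking that the volume no longer appears in the volume list (`<ctdb_vol_name>` is the same placeholder as in the command above):

```shell
# The CTDB volume should be absent from the volume list after deletion.
if gluster volume list | grep -qx '<ctdb_vol_name>'; then
    echo "CTDB volume still present"
else
    echo "CTDB volume removed"
fi
```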
- Update the system.
# yum update
Review the packages to be updated, and enter y to proceed with the update when prompted.
Wait for the update to complete.
- If updates to the kernel package occurred, or if you are migrating from Gluster NFS to NFS Ganesha as part of this update, reboot the system.
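A rough way to decide whether a reboot is needed is to compare the running kernel with the newest installed one. This sketch assumes an RPM-based system and the default `rpm -q --last` output format:

```shell
# Compare the running kernel with the most recently installed kernel package.
running=$(uname -r)
latest=$(rpm -q --last kernel 2>/dev/null | awk 'NR==1 {sub(/^kernel-/, "", $1); print $1}')
if [ -n "$latest" ] && [ "$running" != "$latest" ]; then
    echo "Kernel updated ($running -> $latest): reboot before continuing"
else
    echo "no pending kernel change detected"
fi
```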
- Start glusterd.
On Red Hat Enterprise Linux 7:
# systemctl start glusterd
On Red Hat Enterprise Linux 6:
# service glusterd start
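Before moving on to the next server, you can verify that the daemon came back up (RHEL 7 shown; `systemctl is-active` is standard systemd):

```shell
# Print the glusterd service state; "active" means the daemon is running.
state=$(systemctl is-active glusterd 2>/dev/null || echo unknown)
echo "glusterd: $state"
```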
- When all servers have been updated, run the following command to update the cluster operating version. This helps to prevent any compatibility issues within the cluster.
# gluster volume set all cluster.op-version 31305
31305 is the cluster.op-version value for Red Hat Gluster Storage 3.4 Batch 3 Update. Refer to Section 1.5, “Supported Versions of Red Hat Gluster Storage” for the correct cluster.op-version value for other versions.
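The current value can be inspected with `gluster volume get`, which is useful both before and after raising it:

```shell
# Show the cluster's current operating version.
gluster volume get all cluster.op-version
```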
- If you want to migrate from Gluster NFS to NFS Ganesha as part of this update, install the NFS-Ganesha packages as described in Chapter 4, Deploying NFS-Ganesha on Red Hat Gluster Storage, and use the information in the NFS Ganesha section of the Red Hat Gluster Storage 3.4 Administration Guide to configure the NFS Ganesha cluster.
- Start all volumes.
for vol in `gluster volume list`; do gluster --mode=script volume start $vol; sleep 2s; done
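To confirm that all volumes came back online, the name and status lines can be filtered from `gluster volume info` (output format as in current releases):

```shell
# Each volume should report "Status: Started".
gluster volume info | grep -E '^(Volume Name|Status)'
```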
- If you did not reboot as part of the update process, run the following command to remount the meta volume:
# mount /var/run/gluster/shared_storage/
If this command does not work, review the content of /etc/fstab, ensure that the entry for the shared storage is configured correctly, and re-run the mount command. The line for the meta volume in the /etc/fstab file should look like the following:
hostname:/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults 0 0
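Whether the meta volume is actually mounted can be checked against /proc/mounts (the path matches the fstab line above; on RHEL 7, /var/run is a symlink to /run):

```shell
# Report whether the shared storage meta volume is currently mounted.
if grep -q 'gluster/shared_storage' /proc/mounts; then
    echo "shared storage is mounted"
else
    echo "shared storage is NOT mounted"
fi
```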
- If you use Gluster NFS to access volumes, enable Gluster NFS using the following command:
# gluster volume set volname nfs.disable off
For example:
# gluster volume set testvol nfs.disable off
volume set: success
- If you use geo-replication, restart the geo-replication sessions when the upgrade is complete.
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start
You may need to append the force parameter to successfully restart in some circumstances. See BZ#1347625 for details.
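After restarting, the session state can be checked with the status subcommand (same placeholder names as in the start command above):

```shell
# Show the state of the geo-replication session; it should report
# Active/Passive rather than Stopped or Faulty once the restart succeeds.
gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status
```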