7.2. Updating Red Hat Gluster Storage in the Offline Mode
Important
- Offline updates result in server downtime, as volumes are offline during the update process.
- Complete updates to all Red Hat Gluster Storage servers before updating any clients.
Updating Red Hat Gluster Storage 3.3 in the offline mode
- Ensure that you have a working backup, as described in Section 7.1, “Before you update”.
- Stop all volumes.
# for vol in `gluster volume list`; do gluster --mode=script volume stop $vol; sleep 2s; done
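Before moving on, it can help to confirm that no volumes remain in the Started state. The sketch below is not part of the original procedure: it assumes the `gluster` CLI is on the PATH, and the `count_started` helper name is illustrative.

```shell
#!/bin/sh
# count_started: read `gluster volume info`-style text on stdin and print
# the number of volumes whose Status line still reads "Started".
count_started() {
    grep -c '^Status: Started' || true
}

# Illustrative usage against live output (assumes the gluster CLI is installed):
#   started=$(gluster volume info | count_started)
#   [ "$started" -eq 0 ] || echo "WARNING: $started volume(s) still started"
```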
- Run the following commands on one server at a time.
- Stop all gluster services.
On Red Hat Enterprise Linux 7:
# systemctl stop glusterd
# pkill glusterfs
# pkill glusterfsd
On Red Hat Enterprise Linux 6:
# service glusterd stop
# pkill glusterfs
# pkill glusterfsd
- If you want to migrate from Gluster NFS to NFS Ganesha as part of this update, perform the following additional steps.
- Stop and disable CTDB. This ensures that multiple versions of Samba do not run in the cluster during the update process, and avoids data corruption.
# systemctl stop ctdb
# systemctl disable ctdb
- Verify that the CTDB and NFS services are stopped:
# ps axf | grep -E '(ctdb|nfs)[d]'
- Delete the CTDB volume by executing the following command:
# gluster vol delete <ctdb_vol_name>
- Update the system.
# yum update
Review the packages to be updated, and enter y to proceed with the update when prompted. Wait for the update to complete.
- If updates to the kernel package occurred, or if you are migrating from Gluster NFS to NFS Ganesha as part of this update, reboot the system.
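One rough way to decide whether the kernel step applies is to compare the running kernel release with the newest installed kernel package. This check is not from the guide; the `needs_reboot` helper and the `rpm` query shown in the comment are illustrative assumptions for an RPM-based system.

```shell
#!/bin/sh
# needs_reboot: succeed (exit 0) when the running kernel release differs
# from the newest installed kernel release, i.e. a reboot would pick up
# the updated kernel.
needs_reboot() {
    running="$1"
    newest="$2"
    [ "$running" != "$newest" ]
}

# Illustrative usage (assumes an RPM-based system):
#   running=$(uname -r)
#   newest=$(rpm -q --last kernel | head -1 | sed 's/^kernel-\([^ ]*\).*/\1/')
#   needs_reboot "$running" "$newest" && echo "kernel updated; reboot required"
```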
- Start glusterd.
On Red Hat Enterprise Linux 7:
# systemctl start glusterd
On Red Hat Enterprise Linux 6:
# service glusterd start
- When all servers have been updated, run the following command to update the cluster operating version. This helps to prevent any compatibility issues within the cluster.
# gluster volume set all cluster.op-version 31102
Note
31102 is the cluster.op-version value for the latest Red Hat Gluster Storage 3.3.1 glusterfs Async update. Refer to Section 1.5, “Supported Versions of Red Hat Gluster Storage” for the correct cluster.op-version value for other versions.
- If you want to migrate from Gluster NFS to NFS Ganesha as part of this update, install the NFS-Ganesha packages as described in Chapter 4, Deploying NFS-Ganesha on Red Hat Gluster Storage, and use the information in the NFS Ganesha section of the Red Hat Gluster Storage 3.3 Administration Guide to configure the NFS Ganesha cluster.
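The operating version that each server is actually running can be read from the glusterd state file, which records a line of the form `operating-version=NNNNN`. The sketch below is illustrative and not part of the original procedure; the `op_version` helper name is an assumption, and the usual state-file path is shown in the comment.

```shell
#!/bin/sh
# op_version: extract the operating-version value from glusterd.info-style
# key=value text supplied on stdin.
op_version() {
    awk -F= '$1 == "operating-version" { print $2 }'
}

# Illustrative usage on a server (typical glusterd state directory):
#   op_version < /var/lib/glusterd/glusterd.info
```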
- Start all volumes.
# for vol in `gluster volume list`; do gluster --mode=script volume start $vol; done
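After starting the volumes, it may be worth listing any that did not come up. This check is not part of the original guide: it assumes the `gluster` CLI is available, and the `not_started` helper name is illustrative.

```shell
#!/bin/sh
# not_started: read `gluster volume info`-style text on stdin and print the
# name of each volume whose Status line is anything other than "Started".
not_started() {
    awk '/^Volume Name:/ { name = $3 }
         /^Status:/ && $2 != "Started" { print name }'
}

# Illustrative usage (assumes the gluster CLI is installed):
#   gluster volume info | not_started
```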
- If you did not reboot as part of the update process, run the following command to remount the meta volume:
# mount /var/run/gluster/shared_storage/
If this command does not work, review the content of /etc/fstab, ensure that the entry for the shared storage is configured correctly, and re-run the mount command. The line for the meta volume in the /etc/fstab file should look like the following:
hostname:/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults 0 0
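A quick scripted check can confirm that an entry of that shape is present. The sketch below is illustrative rather than part of the guide; the `has_meta_volume` helper name is an assumption, and it only verifies the mount point and filesystem type fields of fstab-style text.

```shell
#!/bin/sh
# has_meta_volume: read fstab-style text on stdin and exit 0 if a glusterfs
# entry is mounted at /var/run/gluster/shared_storage (trailing slash
# optional), exit 1 otherwise.
has_meta_volume() {
    awk '$2 ~ "^/var/run/gluster/shared_storage/?$" && $3 == "glusterfs" { found = 1 }
         END { exit !found }'
}

# Illustrative usage:
#   has_meta_volume < /etc/fstab && echo "meta volume entry present"
```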
- If you use Gluster NFS to access volumes, enable Gluster NFS using the following command:
# gluster volume set volname nfs.disable off
For example:
# gluster volume set testvol nfs.disable off
volume set: success
- If you use geo-replication, restart geo-replication sessions when the upgrade is complete.
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start
You may need to append the force parameter to successfully restart in some circumstances. See BZ#1347625 for details.