7.3. In-service Software Update from Red Hat Gluster Storage
Important
In Red Hat Enterprise Linux 7 based Red Hat Gluster Storage, updating to 3.1 or higher reloads firewall rules. All runtime-only changes made before the reload are lost.
Important
The SMB and CTDB services do not support in-service updates. The procedure outlined in this section involves service interruptions to the SMB and CTDB services.
Before you update, be aware:
- Complete updates to all Red Hat Gluster Storage servers before updating any clients.
- If geo-replication is in use, complete updates to all slave nodes before updating master nodes.
- Erasure coded (dispersed) volumes can be updated while in-service only if the disperse.optimistic-change-log and disperse.eager-lock options are set to off. Wait for at least two minutes after disabling these options before attempting to upgrade to ensure that these configuration changes take effect for I/O operations.
- If updating Samba, ensure that Samba is upgraded on all nodes simultaneously, as running different versions of Samba in the same cluster results in data corruption.
- Your system must be registered to Red Hat Network in order to receive updates. For more information, see Section 2.6, “Subscribing to the Red Hat Gluster Storage Server Channels”.
- Do not perform any volume operations while the cluster is being updated.
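Since servers must be updated before clients, it can help to record each server's current version first. The following is a minimal sketch, assuming passwordless SSH to each server; the node names in the commented usage are placeholders.

```shell
# Sketch: extract the version number from `gluster --version` output,
# whose first line is typically "glusterfs X.Y.Z built on ...".
glusterfs_version() {
  # reads `gluster --version` output on stdin and prints the version number
  awk 'NR == 1 { print $2 }'
}

# Hypothetical usage across servers:
# for node in server1 server2; do
#   printf '%s: ' "$node"; ssh "$node" gluster --version | glusterfs_version
# done
```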
Updating Red Hat Gluster Storage 3.3 in in-service mode
- Ensure that you have a working backup, as described in Section 7.1, “Before you update”.
- If you have a replicated configuration, perform these steps on all nodes of a replica set. If you have a distributed-replicated configuration, perform these steps on one replica set at a time, for all replica sets.
- Stop any geo-replication sessions.
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL stop
- If this node is part of an NFS-Ganesha cluster, place the node in standby mode.
# pcs cluster standby
- Verify that there are no pending self-heals:
# gluster volume heal volname info
Wait for any self-heal operations to complete before continuing.
- If this node is part of an NFS-Ganesha cluster:
- Disable the PCS cluster and verify that it has stopped.
# pcs cluster disable
# pcs status
- Stop the nfs-ganesha service.
# systemctl stop nfs-ganesha
- If you need to update an erasure coded (dispersed) volume, set the disperse.optimistic-change-log and disperse.eager-lock options to off. Wait for at least two minutes after disabling these options before attempting to upgrade to ensure that these configuration changes take effect for I/O operations.
# gluster volume set volname disperse.optimistic-change-log off
# gluster volume set volname disperse.eager-lock off
- Stop the gluster services on the storage server using the following commands.
On Red Hat Enterprise Linux 7:
# systemctl stop glusterd
# pkill glusterfs
# pkill glusterfsd
On Red Hat Enterprise Linux 6:
# service glusterd stop
# pkill glusterfs
# pkill glusterfsd
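Before running the update it is worth confirming that no gluster daemons survived the stop step. A minimal sketch, assuming pgrep (from procps) is available; the process names match the pkill commands above.

```shell
# Sketch: succeed (exit 0) only when no process with the exact name $1 is running.
gluster_stopped() {
  ! pgrep -x "$1" > /dev/null
}

# Warn about any gluster daemon that is still alive.
for daemon in glusterd glusterfs glusterfsd; do
  gluster_stopped "$daemon" || echo "WARNING: $daemon is still running" >&2
done
```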
- If you use Samba:
- Enable the required repository.
On Red Hat Enterprise Linux 6.7 or later:
# subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-6-server-rpms
On Red Hat Enterprise Linux 7:
# subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-7-server-rpms
- Stop the CTDB and SMB services across all nodes in the Samba cluster using the following commands. Stopping the CTDB service also stops the SMB service.
On Red Hat Enterprise Linux 7:
# systemctl stop ctdb
# systemctl disable ctdb
On Red Hat Enterprise Linux 6:
# service ctdb stop
# chkconfig ctdb off
This ensures different versions of Samba do not run in the same Samba cluster until all Samba nodes are updated.
- Verify that the CTDB and SMB services are stopped by running the following command:
ps axf | grep -E '(ctdb|smb|winbind|nmb)[d]'
- Update the server using the following command:
# yum update
Take note of the packages being updated, and wait for the update to complete.
- If a kernel update was included as part of the update process in the previous step, reboot the server.
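One way to decide whether a reboot is needed is to compare the running kernel with the newest installed one. A minimal sketch; the rpm query in the commented usage is an assumption about RHEL kernel packaging, so verify it on your system.

```shell
# Sketch: report whether the running kernel differs from the newest installed kernel.
reboot_needed() {
  # args: running kernel release, newest installed kernel release
  [ "$1" != "$2" ]
}

# Hypothetical usage on the updated server:
# running=$(uname -r)
# newest=$(rpm -q kernel --qf '%{VERSION}-%{RELEASE}.%{ARCH}\n' | sort -V | tail -1)
# reboot_needed "$running" "$newest" && echo "Kernel changed: reboot the server"
```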
- If a reboot of the server was not required, start the gluster services on the storage server using the following command.
On Red Hat Enterprise Linux 7:
# systemctl start glusterd
On Red Hat Enterprise Linux 6:
# service glusterd start
- Verify that you have updated to the latest version of the Red Hat Gluster Storage server.
# gluster --version
Compare the output with the desired version in Section 1.5, “Supported Versions of Red Hat Gluster Storage”.
- Ensure that all bricks are online. To check the status, execute the following command:
# gluster volume status
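To spot offline bricks without reading the whole status table, the output can be filtered. A minimal sketch; the column layout (the Online field sitting second-to-last, before the Pid) is an assumption based on typical glusterfs 3.x output, so check it against your release.

```shell
# Sketch: count bricks whose Online column shows "N" in `gluster volume status` output.
offline_bricks() {
  # reads status output on stdin; prints the number of bricks marked offline
  awk '/^Brick / && $(NF-1) == "N" { n++ } END { print n+0 }'
}

# Hypothetical usage (expect 0 before continuing):
# gluster volume status volname | offline_bricks
```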
- Start self-heal on the volume.
# gluster volume heal volname
- Ensure self-heal is complete on the replica using the following command:
# gluster volume heal volname info
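Rather than re-running the info command by hand, the pending entries can be totalled and polled. A minimal sketch, assuming the per-brick "Number of entries: N" lines that `gluster volume heal volname info` prints.

```shell
# Sketch: sum the "Number of entries" counts across all bricks.
heal_pending() {
  # reads heal info output on stdin; prints the total number of pending entries
  awk -F': ' '/^Number of entries:/ { total += $2 } END { print total+0 }'
}

# Hypothetical usage: block until self-heal has finished on the volume.
# until [ "$(gluster volume heal volname info | heal_pending)" -eq 0 ]; do sleep 10; done
```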
- Verify that shared storage is mounted.
# mount | grep /run/gluster/shared_storage
- When all nodes in the volume have been updated, run the following command to update the op-version of the cluster. This helps to prevent any compatibility issues within the cluster.
# gluster volume set all cluster.op-version 31102
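To confirm the new op-version took effect, its current value can be read back and extracted. A minimal sketch; the availability of `gluster volume get all cluster.op-version` in your release is an assumption, so verify it first.

```shell
# Sketch: pull the op-version value out of `gluster volume get` output.
opversion_of() {
  # reads `gluster volume get` output on stdin; prints the op-version value
  awk '$1 == "cluster.op-version" { print $2 }'
}

# Hypothetical usage (expect 31102 after the step above):
# gluster volume get all cluster.op-version | opversion_of
```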
Note
31102 is the cluster.op-version value for the latest Red Hat Gluster Storage 3.3.1 glusterfs Async. Refer to Section 1.5, “Supported Versions of Red Hat Gluster Storage” for the correct cluster.op-version value for other versions.
- If you use Samba:
- Mount /gluster/lock before starting CTDB by executing the following command:
# mount -a
- If all servers that host volumes accessed via SMB have been updated, start and re-enable the CTDB and Samba services by executing the following commands.
On Red Hat Enterprise Linux 7:
# systemctl start ctdb
# systemctl enable ctdb
On Red Hat Enterprise Linux 6:
# service ctdb start
# chkconfig ctdb on
- To verify that the CTDB and SMB services have started, execute the following command:
ps axf | grep -E '(ctdb|smb|winbind|nmb)[d]'
- If you had a meta volume configured prior to this upgrade, and you did not reboot as part of the upgrade process, mount the meta volume:
# mount /var/run/gluster/shared_storage/
If this command does not work, review the content of /etc/fstab and ensure that the entry for the shared storage is configured correctly, then re-run the mount command. The line for the meta volume in the /etc/fstab file should look like the following:
hostname:/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults 0 0
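A quick way to verify the mount is to filter the `mount` output for the expected mount point. A minimal sketch using the mount point from the step above.

```shell
# Sketch: succeed when the given mount point appears in `mount` output read from stdin.
is_mounted() {
  # arg: mount point (without trailing slash, as `mount` prints it)
  grep -q " $1 "
}

# Hypothetical usage:
# mount | is_mounted /var/run/gluster/shared_storage || echo "shared storage not mounted" >&2
```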
- If this node is part of an NFS-Ganesha cluster:
- If SELinux is in use, set the ganesha_use_fusefs Boolean to on.
# setsebool -P ganesha_use_fusefs on
- Start the nfs-ganesha service:
# systemctl start nfs-ganesha
- Enable and start the cluster.
# pcs cluster enable
# pcs cluster start
- Release the node from standby mode.
# pcs cluster unstandby
- Verify that the PCS cluster is running and that the volume is exporting correctly.
# pcs status
# showmount -e
NFS-Ganesha enters a short grace period after performing these steps. I/O operations halt during this grace period. Wait until you see NFS Server Now NOT IN GRACE in the ganesha.log file before continuing.
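The wait for the end of the grace period can be scripted by polling the log. A minimal sketch; the log path varies between releases (often /var/log/ganesha.log), so it is passed in explicitly here.

```shell
# Sketch: block until the NOT IN GRACE message appears in the given log file.
wait_for_grace_end() {
  # arg: path to ganesha.log
  until grep -q 'NFS Server Now NOT IN GRACE' "$1"; do
    sleep 5
  done
}

# Hypothetical usage:
# wait_for_grace_end /var/log/ganesha.log
```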
- If you use geo-replication, restart geo-replication sessions when upgrade is complete.
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start
Note
As a result of BZ#1347625, you may need to use the force parameter to successfully restart in some circumstances.
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start force
- If you disabled the disperse.optimistic-change-log and disperse.eager-lock options in order to update an erasure coded (dispersed) volume, re-enable these settings.
# gluster volume set volname disperse.optimistic-change-log on
# gluster volume set volname disperse.eager-lock on