6.4. Upgrading to Red Hat Gluster Storage 3.5
Disable all repositories
# subscription-manager repos --disable='*'
Subscribe to the Red Hat Enterprise Linux 7 channel
# subscription-manager repos --enable=rhel-7-server-rpms
Check for stale Red Hat Enterprise Linux 6 packages
Check for any stale Red Hat Enterprise Linux 6 packages after the upgrade:
# rpm -qa | grep el6
Important
If the output lists any Red Hat Enterprise Linux 6 packages, contact Red Hat Support for guidance on handling them.
Update and reboot
Update the Red Hat Enterprise Linux 7 packages and reboot:
# yum update
# reboot
Verify the version number
Ensure that the latest version of Red Hat Enterprise Linux 7 is shown when you view the `redhat-release` file:
# cat /etc/redhat-release
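As a sketch, the version check above can also be done programmatically. The release string and the `extract_major` helper below are illustrative assumptions, not part of the product; on a real system you would read /etc/redhat-release instead of the sample string.

```shell
#!/bin/sh
# Hypothetical helper: extract the major version number from a
# redhat-release string. Sample input is used here for illustration;
# on a real system, read /etc/redhat-release instead.
extract_major() {
    # Pull the first "X.Y" version number and keep the part before the dot.
    echo "$1" | grep -o '[0-9]\+\.[0-9]\+' | head -n1 | cut -d. -f1
}

release="Red Hat Enterprise Linux Server release 7.9 (Maipo)"
major=$(extract_major "$release")
echo "$major"
```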
Subscribe to the required channels
- Subscribe to the Gluster channel:
# subscription-manager repos --enable=rh-gluster-3-for-rhel-7-server-rpms
- If you use Samba, enable its repository.
# subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-7-server-rpms
- If you use NFS-Ganesha, enable its repository.
# subscription-manager repos --enable=rh-gluster-3-nfs-for-rhel-7-server-rpms --enable=rhel-ha-for-rhel-7-server-rpms
- If you use gdeploy, enable the Ansible repository:
# subscription-manager repos --enable=rhel-7-server-ansible-2-rpms
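After enabling the channels above, a quick sanity check is to compare the output of `subscription-manager repos --list-enabled` against the repository IDs you expect. The sketch below uses a sample list in the `enabled` variable rather than live command output, and checks only two of the repositories as an illustration.

```shell
#!/bin/sh
# Sketch: verify that required repositories appear in the enabled list.
# $enabled is sample data standing in for real
# `subscription-manager repos --list-enabled` output.
enabled="rhel-7-server-rpms
rh-gluster-3-for-rhel-7-server-rpms
rh-gluster-3-samba-for-rhel-7-server-rpms"

missing=0
for repo in rhel-7-server-rpms rh-gluster-3-for-rhel-7-server-rpms; do
    # -x matches the whole line, so partial repo names do not count.
    if ! echo "$enabled" | grep -qx "$repo"; then
        echo "missing repository: $repo"
        missing=1
    fi
done
echo "missing=$missing"
```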
Install and update Gluster
- If you used a Red Hat Enterprise Linux 7 ISO, install Red Hat Gluster Storage 3.5 using the following command:
# yum install redhat-storage-server
This is already installed if you used a Red Hat Gluster Storage 3.5 ISO based on Red Hat Enterprise Linux 7.
- Update Red Hat Gluster Storage to the latest packages using the following command:
# yum update
Verify the installation and update
- Check the current version number of the updated Red Hat Gluster Storage system:
# cat /etc/redhat-storage-release
Important
The version number should be 3.5.
- Ensure that no Red Hat Enterprise Linux 6 packages are present:
# rpm -qa | grep el6
Important
If the output lists packages of Red Hat Enterprise Linux 6 variant, contact Red Hat Support for further course of action on these packages.
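The stale-package check can be sketched as a small script. The package list below is sample data standing in for real `rpm -qa` output; the package names are invented for illustration.

```shell
#!/bin/sh
# Sketch: count packages whose release tag is el6. $packages is sample
# data; on a real system you would use `rpm -qa` instead.
packages="glusterfs-6.0-49.el7.x86_64
samba-4.10.16-5.el7.x86_64
oldlib-1.2-3.el6.x86_64"

stale=$(echo "$packages" | grep -c 'el6')
echo "stale el6 packages: $stale"
```

A non-zero count means Red Hat Enterprise Linux 6 packages remain and, per the note above, Red Hat Support should be contacted.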
Install and configure Firewalld
- Install and start the firewall daemon using the following commands:
# yum install firewalld
# systemctl start firewalld
- Add the Gluster process to the firewall:
# firewall-cmd --zone=public --add-service=glusterfs --permanent
- Add the required services and ports to firewalld. For more information, see Considerations for Red Hat Gluster Storage.
- Reload the firewall using the following command:
# firewall-cmd --reload
Start the Gluster processes
- Start the glusterd process:
# systemctl start glusterd
Update Gluster op-version
Update the Gluster op-version to the required maximum version using the following commands:
# gluster volume get all cluster.max-op-version
# gluster volume set all cluster.op-version op_version
Note
70200 is the cluster.op-version value for Red Hat Gluster Storage 3.5. After updating the cluster.op-version, enable granular-entry-heal for the volume using the following command:
# gluster volume heal $VOLNAME granular-entry-heal enable
The feature is enabled by default after upgrading to Red Hat Gluster Storage 3.5, but it takes effect only after the op-version is raised. Refer to Section 1.5, “Red Hat Gluster Storage Software Components and Versions” for the correct cluster.op-version value for other versions.
Set up Samba and CTDB
If the Gluster setup on Red Hat Enterprise Linux 6 had Samba and CTDB configured, you should have the following available on the updated Red Hat Enterprise Linux 7 system:
- CTDB volume
- /etc/ctdb/nodes file
- /etc/ctdb/public_addresses file
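The /etc/ctdb/nodes file is expected to list one node IP address per line. The sketch below validates that format on sample content; the addresses are invented, and on a real system you would read /etc/ctdb/nodes itself.

```shell
#!/bin/sh
# Sketch: check that every line of a CTDB nodes file looks like an IPv4
# address, one per line. $nodes is sample content standing in for
# /etc/ctdb/nodes.
nodes="192.168.1.10
192.168.1.11
192.168.1.12"

# Count lines that do NOT match a dotted-quad pattern.
bad=$(echo "$nodes" | grep -cv '^[0-9]\+\.[0-9]\+\.[0-9]\+\.[0-9]\+$')
echo "malformed lines: $bad"
```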
Perform the following steps to reconfigure Samba and CTDB:
- Configure the firewall for Samba:
# firewall-cmd --zone=public --add-service=samba --permanent
# firewall-cmd --zone=public --add-port=4379/tcp --permanent
- Subscribe to the Samba channel:
# subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-7-server-rpms
- Update Samba to the latest packages:
# yum update
- Configure CTDB for Samba. For more information, see Configuring CTDB on Red Hat Gluster Storage Server in Setting up CTDB for Samba. Skip the volume creation step, because volumes created before the upgrade persist after it.
- In the following files, replace all in the statement META="all" with the volume name:
/var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
/var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
For example, if the volume name is ctdb_volname, META="all" in the files should be changed to META="ctdb_volname".
- Restart the CTDB volume using the following commands:
# gluster volume stop volume_name
# gluster volume start volume_name
- Start the CTDB process:
# systemctl start ctdb
- Share the volume over Samba if required. See Sharing Volumes over SMB.
Start the volumes and geo-replication
- Start the required volumes using the following command:
# gluster volume start volume_name
- Mount the meta-volume:
# mount /var/run/gluster/shared_storage/
Note
With the release of 3.5 Batch Update 3, the mount point of shared storage changed from /var/run/gluster/ to /run/gluster/.
If this command does not work, review the content of the /etc/fstab file, ensure that the entry for the shared storage is configured correctly, and re-run the mount command. The line for the meta volume in the /etc/fstab file should look like the following:
hostname:/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults 0 0
- Restore the geo-replication session:
# gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start
For more information on geo-replication, see Preparing to Deploy Geo-replication.