6.4. Upgrading to Red Hat Gluster Storage 3.5

  1. Disable all repositories

    # subscription-manager repos --disable='*'
  2. Subscribe to the Red Hat Enterprise Linux 7 channel

    # subscription-manager repos --enable=rhel-7-server-rpms
  3. Check for stale Red Hat Enterprise Linux 6 packages

    Check for any stale Red Hat Enterprise Linux 6 packages that remain after the upgrade:
    # rpm -qa | grep el6

    Important

    If the output lists any Red Hat Enterprise Linux 6 packages, contact Red Hat Support to determine the further course of action for these packages.
  4. Update and reboot

    Update the Red Hat Enterprise Linux 7 packages and reboot.
    # yum update
    # reboot
  5. Verify the version number

    Ensure that the latest version of Red Hat Enterprise Linux 7 is shown when you view the `redhat-release` file:
    # cat /etc/redhat-release
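    The output should be similar to the following; the exact minor release shown may differ:
    Red Hat Enterprise Linux Server release 7.9 (Maipo)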
  6. Subscribe to the required channels

    1. Subscribe to the Gluster channel:
      # subscription-manager repos --enable=rh-gluster-3-for-rhel-7-server-rpms
    2. If you use Samba, enable its repository:
      # subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-7-server-rpms
    3. If you use NFS-Ganesha, enable its repository:
      # subscription-manager repos --enable=rh-gluster-3-nfs-for-rhel-7-server-rpms --enable=rhel-ha-for-rhel-7-server-rpms
    4. If you use gdeploy, enable the Ansible repository:
      # subscription-manager repos --enable=rhel-7-server-ansible-2-rpms
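    Optionally, confirm that the required repositories are now enabled by listing them:
    # subscription-manager repos --list-enabled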
  7. Install and update Gluster

    1. If you used a Red Hat Enterprise Linux 7 ISO, install Red Hat Gluster Storage 3.5 using the following command:
      # yum install redhat-storage-server
      This is already installed if you used a Red Hat Gluster Storage 3.5 ISO based on Red Hat Enterprise Linux 7.
    2. Update Red Hat Gluster Storage to the latest packages using the following command:
      # yum update
  8. Verify the installation and update

    1. Check the current version number of the updated Red Hat Gluster Storage system (sample output is shown after this step):
      # cat /etc/redhat-storage-release

      Important

      The version number should be 3.5.
    2. Ensure that no Red Hat Enterprise Linux 6 packages are present:
      # rpm -qa | grep el6

      Important

      If the output lists any Red Hat Enterprise Linux 6 packages, contact Red Hat Support to determine the further course of action for these packages.
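    For reference, the /etc/redhat-storage-release file on a correctly updated node contains a line similar to the following:
    Red Hat Gluster Storage Server 3.5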
  9. Install and configure Firewalld

    1. Install and start the firewall daemon using the following commands:
      # yum install firewalld
      # systemctl start firewalld
    2. Add the Gluster process to the firewall:
      # firewall-cmd --zone=public --add-service=glusterfs --permanent
    3. Add the required services and ports to firewalld. For more information, see Considerations for Red Hat Gluster Storage. An illustrative example is shown at the end of this step.
    4. Reload the firewall using the following command:
      # firewall-cmd --reload
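    For example, a node that also exports volumes over NFS might additionally need the nfs and rpc-bind services, added before the firewall is reloaded; the exact services and ports depend on your deployment:
    # firewall-cmd --zone=public --add-service=nfs --permanent
    # firewall-cmd --zone=public --add-service=rpc-bind --permanent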
  10. Start the Gluster processes

    1. Start the glusterd process:
      # systemctl start glusterd
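      To verify that the daemon is running and that the other nodes in the trusted storage pool are reachable again, you can run:
      # systemctl status glusterd
      # gluster peer status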
  11. Update Gluster op-version

    Update the Gluster op-version to the required maximum version using the following commands, replacing op_version with the value reported by the first command:
    # gluster volume get all cluster.max-op-version
    # gluster volume set all cluster.op-version op_version

    Note

    70200 is the cluster.op-version value for Red Hat Gluster Storage 3.5. After upgrading the cluster.op-version, enable granular-entry-heal for the volume using the following command:
    # gluster volume heal $VOLNAME granular-entry-heal enable
    This feature is enabled by default after upgrading to Red Hat Gluster Storage 3.5, but it comes into effect only after the op-version is bumped up. Refer to Section 1.5, “Red Hat Gluster Storage Software Components and Versions” for the correct cluster.op-version value for other versions.
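    For example, if the first command reports 70200 as the maximum supported op-version, set it with:
    # gluster volume set all cluster.op-version 70200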
  12. Set up Samba and CTDB

    If the Gluster setup on Red Hat Enterprise Linux 6 had Samba and CTDB configured, you should have the following available on the updated Red Hat Enterprise Linux 7 system:
    • CTDB volume
    • /etc/ctdb/nodes file
    • /etc/ctdb/public_addresses file
    Perform the following steps to reconfigure Samba and CTDB:
    1. Configure the firewall for Samba:
      # firewall-cmd --zone=public --add-service=samba --permanent
      # firewall-cmd --zone=public --add-port=4379/tcp --permanent
    2. Subscribe to the Samba channel:
      # subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-7-server-rpms
    3. Update Samba to the latest packages:
      # yum update
    4. Configure CTDB for Samba. For more information, see Configuring CTDB on Red Hat Gluster Storage Server in Setting up CTDB for Samba. Skip the volume creation step, because volumes that existed before the upgrade persist after the upgrade.
    5. In the following files, replace all in the statement META="all" with the volume name:
      /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
      /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
      For example, if the volume name is ctdb_volname, META="all" in both files should be changed to META="ctdb_volname". See the example at the end of this step.
    6. Restart the CTDB volume using the following commands:
      # gluster volume stop volume_name
      # gluster volume start volume_name
    7. Start the CTDB process:
      # systemctl start ctdb
    8. Share the volume over Samba if required. See Sharing Volumes over SMB.
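    One way to make the META="all" substitution from step 5, assuming the CTDB volume is named ctdb_volname as in the example, is with sed:
    # sed -i 's/META="all"/META="ctdb_volname"/' /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
    # sed -i 's/META="all"/META="ctdb_volname"/' /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh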
  13. Start the volumes and geo-replication

    1. Start the required volumes using the following command:
      # gluster volume start volume_name
    2. Mount the meta-volume:
      # mount /var/run/gluster/shared_storage/

      Note

      With the release of 3.5 Batch Update 3, the mount point of shared storage changed from /var/run/gluster/ to /run/gluster/.
      If this command does not work, review the content of the /etc/fstab file and ensure that the entry for the shared storage is configured correctly, and re-run the mount command. The line for the meta volume in the /etc/fstab file should look like the following:
      hostname:/gluster_shared_storage   /var/run/gluster/shared_storage/   glusterfs   defaults   0 0
    3. Restore the geo-replication session:
      # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start
      For more information on geo-replication, see Preparing to Deploy Geo-replication.
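      For example, with a hypothetical master volume master_vol and a slave host slave.example.com hosting slave_vol, the command would be:
      # gluster volume geo-replication master_vol slave.example.com::slave_vol start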