9.2. In-Service Software Upgrade to Upgrade from Red Hat Storage 2.1 to Red Hat Storage 3.0

In-service software upgrade refers to the ability to progressively update a Red Hat Storage Server cluster with a new version of the software without taking the volumes hosted on the cluster offline. In most cases, normal I/O operations on the volume can continue while the cluster is being updated. This method of updating the storage cluster is supported only for replicated and distributed-replicated volumes.
The in-service software upgrade procedure is supported only from Red Hat Storage 2.1 and its subsequent updates. To upgrade to Red Hat Storage 3.0, ensure that you are on the immediately preceding update before proceeding with the following steps.

9.2.1. Pre-upgrade Tasks

Ensure that you perform the following steps, based on your setup, before proceeding with the in-service software upgrade process.

9.2.1.1. Upgrade Requirements for Red Hat Storage 3.0

The following are the requirements for upgrading to Red Hat Storage 3.0 from the latest preceding update:
  • In-service software upgrade is supported only for nodes with replicate and distributed replicate volumes.
  • Each brick must be an independent, thinly provisioned logical volume (LV).
  • The logical volume that contains the brick must not be used for any other purpose.
  • Recommended Setup

    In addition to the following list, ensure that you read Chapter 9, Configuring Red Hat Storage for Enhancing Performance, in the Red Hat Storage 3.0 Administration Guide:
    • For each brick, create a dedicated thin pool that contains the brick of the volume and its (thin) brick snapshots. With the current thinly provisioned volume design, avoid placing the bricks of different gluster volumes in the same thin pool.
    • The recommended thin pool chunk size is 256 KB. There might be exceptions to this in cases where detailed information about the customer's workload is available.
    • The recommended pool metadata size is 0.1% of the thin pool size for a chunk size of 1 MB or larger. In special cases, where a chunk size of less than 256 KB is recommended, use a pool metadata size of 0.5% of the thin pool size.
  • When server-side quorum is enabled, ensure that bringing one node down does not violate server-side quorum. Add dummy peers to ensure that server-side quorum is not violated until the rolling upgrade is complete, using the following command:
    # gluster peer probe DummyNodeName
    Example 1

    When the server-side quorum percentage is set to the default value (>50%), for a plain replicate volume with two nodes and one brick on each machine, a dummy node which does not contain any bricks must be added to the trusted storage pool to provide high availability of the volume using the command mentioned above.

    Example 2

    In a three-node cluster, if the server-side quorum percentage is set to 77%, bringing down one node violates the server-side quorum. In this scenario, you must add two dummy nodes to meet the server-side quorum, as shown in the sketch below.
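    A minimal sketch of the scenario in Example 2, assuming two spare machines with the hypothetical hostnames dummy-node1.example.com and dummy-node2.example.com that do not host any bricks:
    # gluster peer probe dummy-node1.example.com
    # gluster peer probe dummy-node2.example.com
    # gluster peer status
    The dummy peers count towards server-side quorum without serving any data; detach them with gluster peer detach once the upgrade is complete.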

  • If client-side quorum is enabled, disable it by running the following command:
    # gluster volume reset <vol-name> cluster.quorum-type

    Note

    When client-side quorum is disabled, there is a chance that files might go into split-brain.
  • If there are any geo-replication sessions running between the master and slave, stop them by executing the following command:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL stop
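    For example, for a hypothetical master volume master-vol replicated to slave-vol on the slave host slave1.example.com:
    # gluster volume geo-replication master-vol slave1.example.com::slave-vol stop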
  • Ensure the Red Hat Storage server is registered to the required channels:
    rhel-x86_64-server-6
    rhel-x86_64-server-6-rhs-3
    rhel-x86_64-server-sfs-6
    To subscribe to the channels, run the following command:
    # rhn-channel --add --channel=<channel>
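    For example, to subscribe to the three channels listed above and then list the channels the system is subscribed to:
    # rhn-channel --add --channel=rhel-x86_64-server-6
    # rhn-channel --add --channel=rhel-x86_64-server-6-rhs-3
    # rhn-channel --add --channel=rhel-x86_64-server-sfs-6
    # rhn-channel --list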

9.2.1.2. Restrictions for In-Service Software Upgrade

The following lists some of the restrictions for in-service software upgrade:
  • Do not perform in-service software upgrade when the I/O or load is high on the Red Hat Storage server.
  • Do not perform any volume operations on the Red Hat Storage server.
  • Do not change the hardware configurations.
  • Do not run mixed versions of Red Hat Storage for an extended period of time. For example, do not have a mixed environment of Red Hat Storage 2.1 and Red Hat Storage 2.1 Update 1 for a prolonged time.
  • Do not combine different upgrade methods.
  • Using in-service software upgrade to migrate to thinly provisioned volumes is not recommended; use the offline upgrade method instead. For more information, see Section 9.1, “Updating Red Hat Storage in the Offline Mode”.

9.2.1.3. Configuring repo for Upgrading using ISO

To configure the repo to upgrade using ISO, execute the following steps:

Note

Upgrading Red Hat Storage using ISO can be performed only from the previous release.
  1. Mount the ISO image file under any directory using the following command:
    # mount -o loop <ISO image file> <mount-point>
    For example:
    # mount -o loop RHSS-2.1U1-RC3-20131122.0-RHS-x86_64-DVD1.iso /mnt
  2. Set the repo options in a file in the following location:
     /etc/yum.repos.d/<file_name.repo>
  3. Add the following information to the repo file:
    [local]
    name=local
    baseurl=file:///mnt
    enabled=1
    gpgcheck=0
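    After adding the repo file, you can optionally confirm that yum detects the local repository (a quick sanity check; the repository id local matches the example above):
    # yum clean all
    # yum repolist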

9.2.1.4. Preparing and Monitoring the Upgrade Activity

Before proceeding with the in-service software upgrade, prepare and monitor the following processes:
  • Check the peer status using the following command:
    # gluster peer status
    For example:
    # gluster peer status
    Number of Peers: 2
    
    Hostname: 10.70.42.237
    Uuid: 04320de4-dc17-48ff-9ec1-557499120a43
    State: Peer in Cluster (Connected)
    
    Hostname: 10.70.43.148
    Uuid: 58fc51ab-3ad9-41e1-bb6a-3efd4591c297
    State: Peer in Cluster (Connected)
  • Check the volume status using the following command:
    # gluster volume status
    For example:
    # gluster volume status
    Status of volume: r2
    
    Gluster process                                         Port    Online  Pid
    ------------------------------------------------------------------------------
    Brick 10.70.43.198:/brick/r2_0                          49152   Y       32259
    Brick 10.70.42.237:/brick/r2_1                          49152   Y       25266
    Brick 10.70.43.148:/brick/r2_2                          49154   Y       2857
    Brick 10.70.43.198:/brick/r2_3                          49153   Y       32270
    NFS Server on localhost                                 2049    Y       25280
    Self-heal Daemon on localhost                           N/A     Y       25284
    NFS Server on 10.70.43.148                              2049    Y       2871
    Self-heal Daemon on 10.70.43.148                        N/A     Y       2875
    NFS Server on 10.70.43.198                              2049    Y       32284
    Self-heal Daemon on 10.70.43.198                        N/A     Y       32288
     
    Task Status of Volume r2
    ------------------------------------------------------------------------------
    There are no active volume tasks
    
  • Check the rebalance status using the following command:
    # gluster volume rebalance r2 status
    Node   Rebalanced-files  size      scanned   failures    skipped   status   run time in secs
    ---------   -----------    ---------   --------   ---------  ------  --------  --------------
    10.70.43.198         0       0Bytes       99       0           0    completed     1.00
    10.70.43.148         49      196Bytes    100       0           0    completed     3.00
  • Ensure that there are no pending self-heals before proceeding with in-service software upgrade using the following command:
    # gluster volume heal volname info
    The following example shows a self-heal completion:
    # gluster volume heal drvol info 
    Gathering list of entries to be healed on volume drvol has been successful 
    
    Brick 10.70.37.51:/rhs/brick1/dir1 
    Number of entries: 0 
    
    Brick 10.70.37.78:/rhs/brick1/dir1 
    Number of entries: 0 
    
    Brick 10.70.37.51:/rhs/brick2/dir2 
    Number of entries: 0 
    
    Brick 10.70.37.78:/rhs/brick2/dir2 
    Number of entries: 0

9.2.2. Upgrade Process with Service Impact

In-service software upgrade will impact the following services. Ensure you take the required precautionary measures.
SWIFT

ReST requests that are in transition will fail during the in-service software upgrade. Hence, it is recommended that you stop all Swift services before the in-service software upgrade using the following commands:

# service openstack-swift-proxy stop
# service openstack-swift-account stop
# service openstack-swift-container stop
# service openstack-swift-object stop
NFS

When you NFS mount a volume, any new or outstanding file operations on that file system will hang during the in-service software upgrade until the server is upgraded.

Samba / CTDB

Ongoing I/O on Samba shares will fail because the shares are temporarily unavailable during the in-service software upgrade. It is therefore recommended that you stop the Samba service using the following command:

# service ctdb stop    # Stopping CTDB also stops the SMB service.

Distribute Volume

In-service software upgrade is not supported for distributed volumes. If the cluster has a distributed volume, stop that volume using the following command:

# gluster volume stop <VOLNAME>
Start the volume after in-service software upgrade is complete using the following command:
# gluster volume start <VOLNAME>
Virtual Machine Store

Virtual machine images are likely to be modified constantly. A virtual machine image listed in the output of the volume heal command does not necessarily mean that its self-heal is incomplete; it could mean that the image is being modified continuously.

Hence, if you are using a gluster volume for storing virtual machine images (Red Hat Enterprise Linux, Red Hat Enterprise Virtualization, and Red Hat OpenStack), it is recommended that you power off all virtual machine instances before the in-service software upgrade.

9.2.3. In-Service Software Upgrade

The following steps have to be performed on each node of the replica pair:
  1. Back up the following configuration directories and files to a backup directory:
    /var/lib/glusterd, /etc/swift, /etc/samba, /etc/ctdb, /etc/glusterfs, /var/lib/samba, /var/lib/ctdb
    # cp -a /var/lib/glusterd /backup-disk/
    # cp -a /etc/swift /backup-disk/
    # cp -a /etc/samba /backup-disk/
    # cp -a /etc/ctdb /backup-disk/
    # cp -a /etc/glusterfs /backup-disk/
    # cp -a /var/lib/samba /backup-disk/
    # cp -a /var/lib/ctdb /backup-disk/
    

  2. Stop the gluster services on the storage server using the following commands:
    # service glusterd stop 
    # pkill glusterfs 
    # pkill glusterfsd
  3. To check the system's current subscription status run the following command:
    # migrate-rhs-classic-to-rhsm --status
  4. Install the required packages using the following command:
    # yum install subscription-manager-migration
    # yum install subscription-manager-migration-data
  5. Execute the following command to migrate from Red Hat Network Classic to Red Hat Subscription Manager
    # migrate-rhs-classic-to-rhsm --rhn-to-rhsm
  6. To enable the Red Hat Storage 3.0 repos, execute the following command:
    # migrate-rhs-classic-to-rhsm --upgrade --version 3
  7. To verify if the migration from Red Hat Network Classic to Red Hat Subscription Manager is successful, execute the following command:
    # migrate-rhs-classic-to-rhsm --status
  8. Update the server using the following command:
    # yum update
  9. If the volumes are thickly provisioned, then perform the following steps to migrate to thinly provisioned volumes:

    Note

    Migrating from a thickly provisioned volume to a thinly provisioned volume during in-service software upgrade takes a significant amount of time, depending on the amount of data in the bricks. Migrate only if you plan to use snapshots for your existing environment and need to remain online during the upgrade. If you do not plan to use snapshots, you can skip the migration steps from LVM to thin provisioning. However, if you plan to use snapshots on your existing environment, the offline upgrade method is recommended. For more information regarding offline upgrade, see Section 9.1, “Updating Red Hat Storage in the Offline Mode”.
    Contact a Red Hat Support representative before migrating from thickly provisioned volumes to thinly provisioned volumes using in-service software upgrade.
    1. Unmount all the bricks associated with the volume by executing the following command:
      # umount <mount-point>
      For example:
      # umount /dev/RHS_vg/brick1
    2. Remove the LVM associated with the brick by executing the following command:
      # lvremove logical_volume_name
      For example:
      # lvremove /dev/RHS_vg/brick1
    3. Remove the volume group by executing the following command:
      # vgremove -ff volume_group_name
      For example:
      # vgremove -ff RHS_vg
    4. Remove the physical volume by executing the following command:
      # pvremove -ff physical_volume
    5. If the physical volume (PV) has not been created, create the PV for a RAID 6 volume by executing the following command; otherwise, proceed to the next step:
      # pvcreate --dataalignment 2560K /dev/vdb
      For more information, see Section 12.1, Prerequisites, in the Red Hat Storage 3 Administration Guide.
    6. Create a single volume group from the PV by executing the following command:
      # vgcreate volume_group_name disk
      For example:
      # vgcreate RHS_vg /dev/vdb
    7. Create a thinpool using the following command:
      # lvcreate -L <size> --poolmetadatasize <metadata_size> --chunksize <chunk_size> -T <pool_device>
      For example:
      # lvcreate -L 2T --poolmetadatasize 16G --chunksize 256 -T /dev/RHS_vg/thin_pool
    8. Create a thin volume from the pool by executing the following command:
      # lvcreate -V <size> -T <pool_device> -n <thin_volume_name>
      For example:
      # lvcreate -V 1.5T -T /dev/RHS_vg/thin_pool -n thin_vol
    9. Create filesystem in the new volume by executing the following command:
      # mkfs.xfs -i size=512 <thin_volume_device>
      For example:
      # mkfs.xfs -i size=512 /dev/RHS_vg/thin_vol
      The back-end is now converted to a thinly provisioned volume.
    10. Mount the thinly provisioned volume to the brick directory and set up the extended attributes on the brick. For example:
      # setfattr -n trusted.glusterfs.volume-id \
        -v 0x$(grep volume-id /var/lib/glusterd/vols/volname/info | cut -d= -f2 | sed 's/-//g') $brick
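      A fuller sketch of this step, assuming a hypothetical volume name r2 and a hypothetical brick mount point /rhs/brick1 (substitute your own volume name and brick path):
      # mount /dev/RHS_vg/thin_vol /rhs/brick1
      # setfattr -n trusted.glusterfs.volume-id \
        -v 0x$(grep volume-id /var/lib/glusterd/vols/r2/info | cut -d= -f2 | sed 's/-//g') /rhs/brick1
      If the brick was listed in /etc/fstab, update that entry to point at the new thin volume so that the brick is mounted again after the reboot in the following steps.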
  10. To ensure that the Red Hat Storage Server node is healthy after reboot and can then be joined back to the cluster, it is recommended that you disable glusterd from starting automatically at boot using the following command:
    # chkconfig glusterd off
  11. Reboot the server.
  12. Perform the following operations to change the Automatic File Replication (AFR) extended attributes so that the heal process happens from a brick in the replica subvolume to the thinly provisioned brick.
    1. Create a FUSE mount point from any server to edit the extended attributes. The extended attributes cannot be edited using NFS or CIFS mount points.
      Note that /mnt/r2 is the FUSE mount path used in the following commands.
    2. Create a new directory on the mount point and ensure that a directory with such a name is not already present.
      # mkdir /mnt/r2/name-of-nonexistent-dir
    3. Delete the directory and set the extended attributes.
      # rmdir /mnt/r2/name-of-nonexistent-dir
      # setfattr -n trusted.non-existent-key -v abc /mnt/r2
      # setfattr -x trusted.non-existent-key /mnt/r2
    4. Ensure that the extended attribute of the brick in the replica subvolume (in this example, brick /dev/RHS_vg/brick2, extended attribute trusted.afr.r2-client-1) is not set to zero.
      # getfattr -d -m. -e hex /dev/RHS_vg/brick2
      # file: /dev/RHS_vg/brick2
          security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000 
          trusted.afr.r2-client-0=0x000000000000000000000000  
          trusted.afr.r2-client-1=0x000000000000000300000002
          trusted.gfid=0x00000000000000000000000000000001 
          trusted.glusterfs.dht=0x0000000100000000000000007ffffffe 
          trusted.glusterfs.volume-id=0xde822e25ebd049ea83bfaa3c4be2b440
  13. Start the glusterd service using the following command:
    # service glusterd start
  14. To automatically start the glusterd daemon every time the system boots, run the following command:
    # chkconfig glusterd on
  15. Start self-heal on the volume.
    # gluster volume heal vol-name full
  16. To verify that you have upgraded to the latest version of the Red Hat Storage server, execute the following command:
    # gluster --version
  17. Ensure that all the bricks are online. To check the status execute the following command:
    # gluster volume status
    For example:
    # gluster volume status
    Status of volume: r2
    
    Gluster process                                         Port    Online  Pid
    ------------------------------------------------------------------------------
    Brick 10.70.43.198:/brick/r2_0                          49152   Y       32259
    Brick 10.70.42.237:/brick/r2_1                          49152   Y       25266
    Brick 10.70.43.148:/brick/r2_2                          49154   Y       2857
    Brick 10.70.43.198:/brick/r2_3                          49153   Y       32270
    NFS Server on localhost                                 2049    Y       25280
    Self-heal Daemon on localhost                           N/A     Y       25284
    NFS Server on 10.70.43.148                              2049    Y       2871
    Self-heal Daemon on 10.70.43.148                        N/A     Y       2875
    NFS Server on 10.70.43.198                              2049    Y       32284
    Self-heal Daemon on 10.70.43.198                        N/A     Y       32288
     
    Task Status of Volume r2
    ------------------------------------------------------------------------------
    There are no active volume tasks
  18. Ensure self-heal is complete on the replica using the following command:
    # gluster volume heal volname info
    The following example shows self heal completion:
     # gluster volume heal drvol info 
    Gathering list of entries to be healed on volume drvol has been successful 
    
    Brick 10.70.37.51:/rhs/brick1/dir1 
    Number of entries: 0 
    
    Brick 10.70.37.78:/rhs/brick1/dir1 
    Number of entries: 0 
    
    Brick 10.70.37.51:/rhs/brick2/dir2 
    Number of entries: 0 
    
    Brick 10.70.37.78:/rhs/brick2/dir2 
    Number of entries: 0
  19. Repeat the above steps on the other node of the replica pair.

    Note

    In the case of a distributed-replicate setup, repeat the above steps on all the replica pairs.
  20. After upgrading all the nodes, ensure that you update the op-version of the cluster by executing the following command:
    # gluster volume set all cluster.op-version 30000
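    To confirm that the op-version was updated, you can check the operating-version value that glusterd records on each node (a quick sanity check; glusterd stores this value in its glusterd.info file):
    # grep operating-version /var/lib/glusterd/glusterd.info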

    Note

    If you want to enable snapshots, see Section 12.4, Troubleshooting, in the Red Hat Storage 3 Administration Guide.
  21. If client-side quorum was disabled before the upgrade, enable it again by executing the following command:
    # gluster volume set volname cluster.quorum-type auto
  22. If the geo-replication session between master and slave was stopped before the upgrade, restart the session by executing the following command:
    # gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start

9.2.4. Special Consideration for In-Service Software Upgrade

The following sections describe the in-service software upgrade steps for a CTDB setup.

9.2.4.1. In-Service Software Upgrade for a CTDB Setup

Before you upgrade the CTDB packages, ensure you upgrade the Red Hat Storage server by following these steps. The following steps have to be performed on each node of the replica pair.
  1. To ensure that the CTDB does not start automatically after a reboot run the following command on each node of the CTDB cluster:
    # chkconfig ctdb off
  2. Stop the CTDB service on the Red Hat Storage node using the following command on each node of the CTDB cluster:
    # service ctdb stop
    1. To verify if the CTDB and SMB services are stopped, execute the following command:
      # ps axf | grep -E '(ctdb|smb|winbind|nmb)[d]'
  3. Stop the gluster services on the storage server using the following commands:
    # service glusterd stop 
    # pkill glusterfs 
    # pkill glusterfsd
  4. In /etc/fstab, comment out the line containing the volume used for CTDB service as shown in the following example:
    # HostName:/volname  /gluster/lock glusterfs defaults,transport=tcp 0 0
  5. Update the server using the following command:
    # yum update
  6. To ensure the glusterd service does not start automatically after reboot, execute the following command:
    # chkconfig glusterd off
  7. Reboot the server.
  8. Update the value of META from all to the name of the volume used for the CTDB lock in the following scripts:
    /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
    /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
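    For example, assuming the volume used for the CTDB lock is named ctdb (a hypothetical name) and the scripts still contain the default META="all" line, the change can be made with sed:
    # sed -i 's/META="all"/META="ctdb"/' /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
    # sed -i 's/META="all"/META="ctdb"/' /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh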
  9. In /etc/fstab, uncomment the line containing the volume used for CTDB service as shown in the following example:
    HostName:/volname /gluster/lock glusterfs defaults,transport=tcp 0 0
  10. To automatically start the glusterd daemon every time the system boots, run the following command:
    # chkconfig glusterd on
  11. To automatically start the ctdb daemon every time the system boots, run the following command:
    # chkconfig ctdb on
  12. Start the glusterd service using the following command:
    # service glusterd start
  13. Mount the CTDB volume by running the following command:
    # mount -a
  14. Start the CTDB service using the following command:
    # service ctdb start
  15. To verify if CTDB is running successfully, execute the following commands:
    # ctdb status
    # ctdb ip
    # ctdb ping -n all
CTDB Upgrade

After upgrading the Red Hat Storage server, upgrade the CTDB package by executing the following steps:

Note

  • Upgrading CTDB on all the nodes must be done simultaneously to avoid any data corruption.
  • The following steps have to be performed only when upgrading CTDB from CTDB 1.x to CTDB 2.x.
  1. Stop the CTDB service on all the nodes of the CTDB cluster by executing the following command. Ensure it is performed on all the nodes simultaneously as two different versions of CTDB cannot run at the same time in the CTDB cluster:
    # service ctdb stop
  2. Perform the following operations on all the nodes used as samba servers:
    • Remove the following soft links:
      /etc/sysconfig/ctdb 
      /etc/ctdb/nodes
      /etc/ctdb/public_addresses
    • Copy the following files from the CTDB volume to the corresponding locations by executing the following commands on each node of the CTDB cluster:
      # cp /gluster/lock/nodes /etc/ctdb/nodes
      # cp /gluster/lock/public_addresses /etc/ctdb/public_addresses
  3. Stop and delete the CTDB volume by executing the following commands on one of the nodes of the CTDB cluster:
    # gluster volume stop volname
    # gluster volume delete volname
  4. To remove the existing CTDB package execute the following command:
    # yum remove ctdb
  5. To install CTDB, execute the following command:
    # yum install ctdb2.5
For more information about configuring CTDB on a Red Hat Storage server, see Section 7.5.1, Setting Up CTDB, in the Red Hat Storage 3 Administration Guide.

9.2.4.2. Verifying In-Service Software Upgrade

To verify that you have upgraded to the latest version of the Red Hat Storage server, execute the following command:
# gluster --version

9.2.4.3. Upgrading the Native Client

All clients must be of the same version. Red Hat strongly recommends that you upgrade the servers before upgrading the clients. For more information about installing and upgrading the native client, see Section 9.2, Native Client, in the Red Hat Storage Administration Guide.