Chapter 9. Upgrading from Red Hat Gluster Storage 2.1.x to Red Hat Gluster Storage 3.1
This chapter describes the procedure to upgrade from Red Hat Gluster Storage 2.1.x to Red Hat Gluster Storage 3.1.
Note
Upgrading from Red Hat Enterprise Linux 6 based Red Hat Gluster Storage to Red Hat Enterprise Linux 7 based Red Hat Gluster Storage is not supported.
9.1. Offline Upgrade from Red Hat Gluster Storage 2.1.x to Red Hat Gluster Storage 3.1
9.1.1. Upgrading from Red Hat Gluster Storage 2.1.x to Red Hat Gluster Storage 3.1 for Systems Subscribed to Red Hat Network
Pre-Upgrade Steps:
- Unmount the clients using the following command:
umount mount-point
- Stop the volumes using the following command:
gluster volume stop volname
- Unmount the data partition(s) on the servers using the following command:
umount mount-point
- To verify that the volumes are stopped, use the following command:
# gluster volume info
If there is more than one volume, stop all of the volumes.
- Stop the glusterd service on all the servers using the following command:
# service glusterd stop
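Before moving on to the upgrade steps, it can help to confirm that no gluster daemons survived the shutdown. A minimal sketch using pgrep from procps (the process names glusterd and glusterfsd are the usual management and brick daemons, but check your own process list):

```shell
# Verify that glusterd (management daemon) and glusterfsd (brick processes)
# have exited before upgrading. pgrep exits non-zero when nothing matches.
if pgrep -x glusterd >/dev/null || pgrep -x glusterfsd >/dev/null; then
    echo "gluster processes still running"
else
    echo "no gluster processes running"
fi
```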
yum Upgrade Steps:
Important
- You can upgrade to Red Hat Gluster Storage 3.1 from Red Hat Gluster Storage 2.1 Update 4 or later. If your current version is lower than Update 4, then upgrade it to Update 4 before upgrading to Red Hat Gluster Storage 3.1.
- Upgrade the servers before upgrading the clients.
- Execute the following command to kill all gluster processes:
# pkill gluster
- To check the system's current subscription status run the following command:
# migrate-rhs-classic-to-rhsm --status
Note
The migrate-rhs-classic-to-rhsm command is only available in Red Hat Gluster Storage 2.1 Update 4 or higher. If your system does not have this command, ensure that you have updated the redhat-storage-release package to the latest version.
- Execute the following command to migrate from Red Hat Network Classic to Red Hat Subscription Manager:
# migrate-rhs-classic-to-rhsm --rhn-to-rhsm
- Enable the Red Hat Gluster Storage 3.0 repositories with the following command:
# migrate-rhs-classic-to-rhsm --upgrade --version 3
- If you require Samba, and you are using Red Hat Gluster Storage 3.0.4 or later on Red Hat Enterprise Linux 6.7, enable the following repository:
# subscription-manager repos --enable=rh-gluster-3-samba-for-rhel-6-server-rpms
Warning
- Samba version 3 is deprecated as of Red Hat Gluster Storage 3.0 Update 4, and further updates are not provided for samba-3.x. It is recommended that you upgrade to Samba-4.x, which is provided in a separate channel or repository, for all updates, including security updates.
- Downgrade of Samba from Samba 4.x to Samba 3.x is not supported.
- Ensure that Samba is upgraded on all the nodes simultaneously, as running different versions of Samba in the same cluster will lead to data corruption.
- Stop the CTDB and SMB services across all nodes in the Samba cluster using the following command, as different versions of Samba cannot run in the same Samba cluster:
# service ctdb stop
- To verify if the CTDB and SMB services are stopped, execute the following command:
ps axf | grep -E '(ctdb|smb|winbind|nmb)[d]'
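The bracket expression [d] in the pattern above is a common trick for keeping grep out of its own results: grep's command line contains the literal text [d], which the pattern (which ends in a plain d) never matches. A small self-contained demonstration on canned input:

```shell
# Simulated process listing: one real smbd process, plus the grep command
# itself as ps would show it. Only the real process line matches.
count=$(printf 'smbd running\ngrep -E (ctdb|smb|winbind|nmb)[d]\n' \
    | grep -cE '(ctdb|smb|winbind|nmb)[d]')
echo "$count"   # prints 1
```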
- To verify if the migration from Red Hat Network Classic to Red Hat Subscription Manager is successful, execute the following command:
# migrate-rhs-classic-to-rhsm --status
- To upgrade the server to Red Hat Gluster Storage 3.1, use the following command:
# yum update
Note
It is recommended to add the child channel of Red Hat Enterprise Linux 6 that contains the native client to refresh the clients and access the new features in Red Hat Gluster Storage 3.1. For more information, refer to Installing Native Client in the Red Hat Gluster Storage Administration Guide.
- Reboot the servers. This is required as the kernel is updated to the latest version.
- When all nodes have been upgraded, run the following command to update the op-version of the cluster. This helps to prevent any compatibility issues within the cluster:
# gluster volume set all cluster.op-version 30707
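As context for the value 30707: upstream glusterfs conventionally encodes its op-version as major*10000 + minor*100 + patch, so 30707 corresponds to glusterfs 3.7.7, the release Red Hat Gluster Storage 3.1 is based on. A small sketch of that encoding (the scheme is an assumption about the upstream convention; verify the value against your release notes):

```shell
# Compute the op-version integer from a glusterfs version string.
version="3.7.7"
IFS=. read -r major minor patch <<EOF
$version
EOF
op_version=$((major * 10000 + minor * 100 + patch))
echo "$op_version"   # prints 30707
```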
9.1.2. Upgrading from Red Hat Gluster Storage 2.1.x to Red Hat Gluster Storage 3.1 for Systems Subscribed to Red Hat Satellite Server
- Unmount all the clients using the following command:
umount mount-point
- Stop the volumes using the following command:
# gluster volume stop volname
- Unmount the data partition(s) on the servers using the following command:
umount mount-point
- Ensure that the Red Hat Gluster Storage 2.1 server is updated to Red Hat Gluster Storage 2.1 Update 4 or later, by running the following command:
# yum update
- Create an Activation Key at the Red Hat Satellite Server, and associate it with the following channels. For more information, refer to Section 2.5, “Installing from Red Hat Satellite Server”
Base Channel:
Red Hat Enterprise Linux Server (v.6 for 64-bit x86_64)
Child channels:
RHEL Server Scalable File System (v. 6 for x86_64)
Red Hat Gluster Storage Server 3 (RHEL 6 for x86_64)
- For Red Hat Gluster Storage 3.0.4 or later, if you require the Samba package add the following child channel:
Red Hat Gluster 3 Samba (RHEL 6 for x86_64)
- Unregister your system from Red Hat Satellite by following these steps:
- Log in to the Red Hat Satellite server.
- Click on the Systems tab in the top navigation bar and then the name of the old or duplicated system in the System List.
- Click the delete system link in the top-right corner of the page.
- Confirm the system profile deletion by clicking the Delete System button.
- On the updated Red Hat Gluster Storage 2.1 Update 4 server, run the following command:
# rhnreg_ks --username username --password password --force --activationkey Activation Key ID
This uses the prepared Activation Key and re-registers the system to the Red Hat Gluster Storage 3 channels on the Red Hat Satellite Server.
- Verify that the channel subscriptions have changed to the following:
# rhn-channel --list
rhel-x86_64-server-6
rhel-x86_64-server-6-rhs-3
rhel-x86_64-server-sfs-6
For Red Hat Gluster Storage 3.0.4 or later, if you have enabled the Samba channel, verify that you have the following channel:
rhel-x86_64-server-6-rh-gluster-3-samba
- Run the following command to upgrade to Red Hat Gluster Storage 3.1:
# yum update
- Reboot, and run volume and data integrity checks.
- When all nodes have been upgraded, run the following command to update the op-version of the cluster. This helps to prevent any compatibility issues within the cluster:
# gluster volume set all cluster.op-version 30707
9.1.3. Upgrading from Red Hat Gluster Storage 2.1.x to Red Hat Gluster Storage 3.1 using an ISO
This method re-images the storage server's software while keeping the data intact, by backing up and restoring the configuration files. It is quite invasive and should only be used if a local yum repository or an Internet connection to access Red Hat Network is not available.
The preferred upgrade method is using the yum command. For more information, refer to Section 9.1.1, “Upgrading from Red Hat Gluster Storage 2.1.x to Red Hat Gluster Storage 3.1 for Systems Subscribed to Red Hat Network”.
Note
- Ensure that you perform the steps listed in this section on all the servers.
- In the case of a geo-replication set-up, perform the steps listed in this section on all the master and slave servers.
- You cannot access data during the upgrade process, so downtime should be scheduled with applications, clients, and other end users.
- Get the volume information and peer status using the following commands:
# gluster volume info
The command displays volume information similar to the following:
Volume Name: volname
Type: Distributed-Replicate
Volume ID: d6274441-65bc-49f4-a705-fc180c96a072
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: server1:/rhs/brick1/brick1
Brick2: server2:/rhs/brick1/brick2
Brick3: server3:/rhs/brick1/brick3
Brick4: server4:/rhs/brick1/brick4
Options Reconfigured:
geo-replication.indexing: on
# gluster peer status
The command displays peer status information similar to the following:
Number of Peers: 3

Hostname: server2
Port: 24007
Uuid: 2dde2c42-1616-4109-b782-dd37185702d8
State: Peer in Cluster (Connected)

Hostname: server3
Port: 24007
Uuid: 4224e2ac-8f72-4ef2-a01d-09ff46fb9414
State: Peer in Cluster (Connected)

Hostname: server4
Port: 24007
Uuid: 10ae22d5-761c-4b2e-ad0c-7e6bd3f919dc
State: Peer in Cluster (Connected)
Note
Make a note of this information to compare with the output after upgrading.
- In case of a geo-replication set-up, stop the geo-replication session using the following command:
# gluster volume geo-replication master_volname slave_node::slave_volname stop
- In case of a CTDB/Samba set-up, stop the CTDB service using the following command:
# service ctdb stop
Stopping the CTDB service also stops the SMB service.
- Verify if the CTDB and the SMB services are stopped using the following command:
ps axf | grep -E '(ctdb|smb|winbind|nmb)[d]'
- In case of an object store set-up, turn off object store using the following commands:
# service gluster-swift-proxy stop
# service gluster-swift-account stop
# service gluster-swift-container stop
# service gluster-swift-object stop
- Stop all the gluster volumes using the following command:
# gluster volume stop volname
- Stop the glusterd service on all the nodes using the following command:
# service glusterd stop
- If there are any gluster processes still running, terminate them using kill.
- Ensure all gluster processes are stopped using the following command:
# pgrep gluster
- Back up the following configuration directories and files to a backup directory: /var/lib/glusterd, /etc/swift, /etc/samba, /etc/ctdb, /etc/glusterfs, /var/lib/samba, /var/lib/ctdb. Ensure that the backup directory is not on the operating system partition.
# cp -a /var/lib/glusterd /backup-disk/
# cp -a /etc/swift /backup-disk/
# cp -a /etc/samba /backup-disk/
# cp -a /etc/ctdb /backup-disk/
# cp -a /etc/glusterfs /backup-disk/
# cp -a /var/lib/samba /backup-disk/
# cp -a /var/lib/ctdb /backup-disk/
Also, back up any other files or configuration files that you might require to restore later. You can create a backup of everything in /etc/.
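The copy commands above can also be expressed as a loop, which makes it easier to keep all servers consistent. A sketch, with the target /backup-disk assumed to be a mounted partition separate from the operating system disk; the helper name backup_dirs is illustrative:

```shell
# Copy each existing source directory into the target, preserving
# ownership and timestamps (cp -a); directories that do not exist on
# this node (for example /etc/swift without an object store) are skipped.
backup_dirs() {
    target=$1; shift
    for dir in "$@"; do
        [ -d "$dir" ] && cp -a "$dir" "$target/"
    done
    return 0
}

# Real usage for the procedure above:
# backup_dirs /backup-disk /var/lib/glusterd /etc/swift /etc/samba \
#             /etc/ctdb /etc/glusterfs /var/lib/samba /var/lib/ctdb
```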
- Locate and unmount the data disk partition that contains the bricks using the following commands:
# mount | grep backend-disk
# umount /dev/device
For example, use the gluster volume info command to display the backend-disk information:
Volume Name: volname
Type: Distributed-Replicate
Volume ID: d6274441-65bc-49f4-a705-fc180c96a072
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: server1:/rhs/brick1/brick1
Brick2: server2:/rhs/brick1/brick2
Brick3: server3:/rhs/brick1/brick3
Brick4: server4:/rhs/brick1/brick4
Options Reconfigured:
geo-replication.indexing: on
In the above example, the backend disk is mounted at /rhs/brick1:
# findmnt /rhs/brick1
TARGET       SOURCE                        FSTYPE  OPTIONS
/rhs/brick1  /dev/mapper/glustervg-brick1  xfs     rw,relatime,attr2,delaylog,no
# umount /rhs/brick1
- Insert the DVD containing the Red Hat Gluster Storage 3.1 ISO and reboot the machine. The installation starts automatically. You must install Red Hat Gluster Storage on the system with the same network credentials, IP address, and host name.
Warning
During installation, while creating a custom layout, ensure that you choose Create Custom Layout to proceed with installation. If you choose Replace Existing Linux System(s), it formats all disks on the system and erases existing data.
Select Create Custom Layout. Click Next.
Figure 9.1. Custom Layout Window
- Select the disk on which to install Red Hat Gluster Storage. Click Next.
For Red Hat Gluster Storage to install successfully, you must select the same disk that contained the operating system data previously.
Warning
While selecting your disk, do not select the disks containing bricks.
Figure 9.2. Select Disk Partition Window
- After installation, ensure that the host name and IP address of the machine are the same as before.
Warning
If the IP address and host name are not the same as before, you will not be able to access the data present in your earlier environment.
- After installation, the system automatically starts glusterd. Stop the gluster service using the following command:
# service glusterd stop
Stopping glusterd:                                         [OK]
- Add entries to /etc/fstab to mount data disks at the same path as before.
Note
Ensure that the mount points exist in your trusted storage pool environment.
- Mount all data disks using the following command:
# mount -a
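For illustration only, an /etc/fstab entry for the brick partition from the earlier findmnt example might look like the following. The device name and mount point are carried over from that example; the mount options shown are generic assumptions, so reuse the options your bricks were mounted with before the re-image:

```
/dev/mapper/glustervg-brick1  /rhs/brick1  xfs  defaults  0 0
```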
- Back up the newly installed glusterd configuration using the following command:
# cp -a /var/lib/glusterd /var/lib/glusterd-backup
- Copy /var/lib/glusterd and /etc/glusterfs from your backup disk to the OS disk:
# cp -a /backup-disk/glusterd/* /var/lib/glusterd
# cp -a /backup-disk/glusterfs/* /etc/glusterfs
Note
Do not restore the swift, samba, and ctdb configuration files from the backup disk. However, any changes in swift, samba, and ctdb must be applied separately in the new configuration files from the backup taken earlier.
- Copy back the latest hooks scripts to /var/lib/glusterd/hooks:
# cp -a /var/lib/glusterd-backup/hooks /var/lib/glusterd
- Ensure you restore any other files from the backup that was created earlier.
- You must restart the glusterd management daemon using the following commands:
# glusterd --xlator-option *.upgrade=yes -N
# service glusterd start
Starting glusterd:                                         [OK]
- Start the volume using the following command:
# gluster volume start volname force
volume start: volname : success
Note
Repeat the above steps on all the servers in your trusted storage pool environment.
- In case you have a pure replica volume (1*n), where n is the replica count, perform the following additional steps:
- Run the fix-layout command on the volume:
# gluster volume rebalance volname fix-layout start
- Wait for the fix-layout operation to complete. You can check the status for completion using the following command:
# gluster volume rebalance volname status
- Stop the volume using the following command:
# gluster volume stop volname
- Force start the volume using the following command:
# gluster volume start volname force
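Waiting for the fix-layout operation above to finish lends itself to a small polling loop rather than re-running the status command by hand. A generic sketch; the helper name wait_until and the example gluster invocation in the comment are illustrative, not part of the product:

```shell
# Retry a command until it succeeds or the attempt budget runs out;
# returns 0 on success, 1 if the command never succeeded.
wait_until() {
    attempts=$1; shift
    while [ "$attempts" -gt 0 ]; do
        "$@" && return 0
        attempts=$((attempts - 1))
        sleep 1
    done
    return 1
}

# Hypothetical usage for the fix-layout step:
# wait_until 120 sh -c 'gluster volume rebalance volname status | grep -q completed'
```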
- In case of an Object Store set-up, any configuration files that were edited should be renamed to end with a .rpmsave file extension, and other unedited files should be removed.
- Re-configure the Object Store. For information on configuring Object Store, refer to Section 18.5 in Chapter 18, Managing Object Store, of the Red Hat Gluster Storage Administration Guide.
- Get the volume information and peer status of the created volume using the following commands:
# gluster volume info
# gluster peer status
Ensure that the output of these commands has the same values that they had before you started the upgrade.
Note
In Red Hat Gluster Storage 3.1, the gluster peer status output does not display the port number.
- Verify the upgrade.
- If all servers in the trusted storage pool are not upgraded, the gluster peer status command displays the peers as disconnected or rejected. The command displays peer status information similar to the following:
# gluster peer status
Number of Peers: 3

Hostname: server2
Uuid: 2dde2c42-1616-4109-b782-dd37185702d8
State: Peer Rejected (Connected)

Hostname: server3
Uuid: 4224e2ac-8f72-4ef2-a01d-09ff46fb9414
State: Peer in Cluster (Connected)

Hostname: server4
Uuid: 10ae22d5-761c-4b2e-ad0c-7e6bd3f919dc
State: Peer Rejected (Disconnected)
- If all systems in the trusted storage pool are upgraded, the gluster peer status command displays peers as connected:
# gluster peer status
Number of Peers: 3

Hostname: server2
Uuid: 2dde2c42-1616-4109-b782-dd37185702d8
State: Peer in Cluster (Connected)

Hostname: server3
Uuid: 4224e2ac-8f72-4ef2-a01d-09ff46fb9414
State: Peer in Cluster (Connected)

Hostname: server4
Uuid: 10ae22d5-761c-4b2e-ad0c-7e6bd3f919dc
State: Peer in Cluster (Connected)
- If all the volumes in the trusted storage pool are started, the gluster volume info command displays the volume status as started:
Volume Name: volname
Type: Distributed-Replicate
Volume ID: d6274441-65bc-49f4-a705-fc180c96a072
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: server1:/rhs/brick1/brick1
Brick2: server2:/rhs/brick1/brick2
Brick3: server3:/rhs/brick1/brick3
Brick4: server4:/rhs/brick1/brick4
Options Reconfigured:
geo-replication.indexing: on
- If you have a geo-replication setup, re-establish the geo-replication session between the master and slave using the following steps:
- Run the following commands on any one of the master nodes:
# cd /usr/share/glusterfs/scripts/
# sh generate-gfid-file.sh localhost:${master-vol} $PWD/get-gfid.sh /tmp/tmp.atyEmKyCjo/upgrade-gfid-values.txt
# scp /tmp/tmp.atyEmKyCjo/upgrade-gfid-values.txt root@${slavehost}:/tmp/
- Run the following commands on a slave node:
# cd /usr/share/glusterfs/scripts/
# sh slave-upgrade.sh localhost:${slave-vol} /tmp/tmp.atyEmKyCjo/upgrade-gfid-values.txt $PWD/gsync-sync-gfid
Note
If the SSH connection for your setup requires a password, you will be prompted for a password for all machines where the bricks are residing.
- Re-create and start the geo-replication sessions. For information on creating and starting geo-replication sessions, refer to Managing Geo-replication in the Red Hat Gluster Storage Administration Guide.
Note
It is recommended to add the child channel of Red Hat Enterprise Linux 6 containing the native client, so that you can refresh the clients and get access to all the new features in Red Hat Gluster Storage 3.1. For more information, refer to the Upgrading Native Client section in the Red Hat Gluster Storage Administration Guide.
- Remount the volume to the client and verify data consistency. If the gluster volume information and gluster peer status information match the information collected before migration, you have successfully upgraded your environment to Red Hat Gluster Storage 3.1.