8.5. Migrating Volumes
Note
Before performing a replace-brick operation, review the known issues related to the replace-brick operation in the Red Hat Storage 3.0 Release Notes.
8.5.1. Replacing a Subvolume on a Distribute or Distribute-replicate Volume
- Add the new bricks to the volume.
# gluster volume add-brick VOLNAME [<stripe|replica> <COUNT>] NEW-BRICK

Example 8.1. Adding a Brick to a Distribute Volume
# gluster volume add-brick test-volume server5:/exp5
Add Brick successful

- Verify the volume information using the command:
# gluster volume info
Volume Name: test-volume
Type: Distribute
Status: Started
Number of Bricks: 5
Bricks:
Brick1: server1:/exp1
Brick2: server2:/exp2
Brick3: server3:/exp3
Brick4: server4:/exp4
Brick5: server5:/exp5

Note
In the case of a Distribute-replicate or stripe volume, you must specify the replica or stripe count in the add-brick command, and provide the same number of bricks as the replica or stripe count.
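For example, a sketch of this on a distribute-replicate volume with a replica count of 2; the server names and brick paths below are illustrative and not part of the example volume above:

# gluster volume add-brick test-volume replica 2 server5:/exp5 server6:/exp6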
- Remove the bricks to be replaced from the subvolume.
- Start the remove-brick operation using the command:
# gluster volume remove-brick VOLNAME [replica <COUNT>] <BRICK> start

Example 8.2. Start a remove-brick operation on a distribute volume
# gluster volume remove-brick test-volume server2:/exp2 start
Remove Brick start successful

- View the status of the remove-brick operation using the command:
# gluster volume remove-brick VOLNAME [replica <COUNT>] BRICK status

Example 8.3. View the Status of remove-brick Operation
# gluster volume remove-brick test-volume server2:/exp2 status
Node     Rebalanced-files     size         scanned    failures    status
------------------------------------------------------------------
server2                16     16777216          52           0    in progress

Keep monitoring the remove-brick operation status by executing the above command. When the value of the status field is set to complete in the output of the remove-brick status command, proceed further.
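If you prefer not to re-run the status command by hand, a minimal polling sketch such as the following can be used; it assumes the in-progress status string matches the output shown above:

# while gluster volume remove-brick test-volume server2:/exp2 status | grep -q "in progress"; do sleep 10; done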
- Commit the remove-brick operation using the command:
# gluster volume remove-brick VOLNAME [replica <COUNT>] <BRICK> commit

Example 8.4. Commit the remove-brick Operation on a Distribute Volume
# gluster volume remove-brick test-volume server2:/exp2 commit

- Verify the volume information using the command:
# gluster volume info
Volume Name: test-volume
Type: Distribute
Status: Started
Number of Bricks: 4
Bricks:
Brick1: server1:/exp1
Brick3: server3:/exp3
Brick4: server4:/exp4
Brick5: server5:/exp5
- Verify the content on the brick after committing the remove-brick operation on the volume. If any files are left over, copy them through a FUSE or NFS mount.
- Verify whether there are any pending files on the bricks of the subvolume. Along with the files, all application-specific extended attributes must be copied. glusterFS also uses extended attributes to store its internal data. The extended attributes used by glusterFS are of the form trusted.glusterfs.*, trusted.afr.*, and trusted.gfid. Any extended attributes other than the ones listed above must also be copied.

To copy the application-specific extended attributes and to achieve an effect similar to the one described above, use the following shell script:

Syntax:
# copy.sh <glusterfs-mount-point> <brick>

Example 8.5. Code Snippet Usage
If the mount point is /mnt/glusterfs and the brick path is /export/brick1, then the script must be run as:

# copy.sh /mnt/glusterfs /export/brick1

#!/bin/bash

MOUNT=$1
BRICK=$2

for file in `find $BRICK ! -type d`; do
    rpath=`echo $file | sed -e "s#$BRICK\(.*\)#\1#g"`
    rdir=`dirname $rpath`

    # Copy the file itself to the corresponding path on the mount.
    cp -fv $file $MOUNT/$rdir;

    # Copy every extended attribute except the glusterFS-internal ones.
    for xattr in `getfattr -e hex -m. -d $file 2>/dev/null | sed -e '/^#/d' | grep -v -E "trusted.glusterfs.*" | grep -v -E "trusted.afr.*" | grep -v "trusted.gfid"`; do
        key=`echo $xattr | cut -d"=" -f 1`
        value=`echo $xattr | cut -d"=" -f 2`

        setfattr $MOUNT/$rpath -n $key -v $value
    done
done
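To spot-check that the extended attributes were carried over for a given file, compare the getfattr output on the brick and on the mount; the path dir1/file1 below is illustrative:

# getfattr -d -m . -e hex /export/brick1/dir1/file1
# getfattr -d -m . -e hex /mnt/glusterfs/dir1/file1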
- To identify a list of files that are in a split-brain state, execute the command:
# gluster volume heal test-volume info

- If any files are listed in the output of the above command, delete those files from the mount point and manually retain the correct copy of each file after comparing the files across the bricks in a replica set. Selecting the correct copy of a file requires manual intervention by the system administrator.
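One possible way to carry out that comparison, sketched with an illustrative file path (dir1/file1) and illustrative brick paths, is to list the split-brain entries explicitly (if your version supports the split-brain variant of the heal command) and checksum each brick's copy before deciding which one to retain:

# gluster volume heal test-volume info split-brain
# md5sum /exp1/dir1/file1        # on the node hosting the first replica
# md5sum /exp2/dir1/file1        # on the node hosting the second replica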
8.5.2. Replacing an Old Brick with a New Brick on a Replicate or Distribute-replicate Volume
- Ensure that the new brick (sys5:/home/gfs/r2_5) that replaces the old brick (sys0:/home/gfs/r2_0) is empty. Ensure that all the bricks are online. The brick that must be replaced can be in an offline state.
- Bring the brick that must be replaced to an offline state, if it is not already offline.
- Identify the PID of the brick to be replaced by executing the command:
# gluster volume status
Status of volume: r2
Gluster process                    Port     Online    Pid
-------------------------------------------------------
Brick sys0:/home/gfs/r2_0          49152    Y         5342
Brick sys1:/home/gfs/r2_1          49153    Y         5354
Brick sys2:/home/gfs/r2_2          49154    Y         5365
Brick sys3:/home/gfs/r2_3          49155    Y         5376

- Log in to the host on which the brick to be replaced has its process running and kill the brick process.
# kill -9 <PID>
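Alternatively, the PID lookup and the kill can be combined. Each brick is served by a glusterfsd process whose command line contains the brick path, so a sketch along these lines works when the brick path is unique on the host:

# kill -9 $(pgrep -f 'glusterfsd.*r2_0')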
- Ensure that the brick to be replaced is offline and the other bricks are online by executing the command:
# gluster volume status
Status of volume: r2
Gluster process                    Port     Online    Pid
------------------------------------------------------
Brick sys0:/home/gfs/r2_0          N/A      N         5342
Brick sys1:/home/gfs/r2_1          49153    Y         5354
Brick sys2:/home/gfs/r2_2          49154    Y         5365
Brick sys3:/home/gfs/r2_3          49155    Y         5376
- Create a FUSE mount point from any server to edit the extended attributes, for example as shown below. Extended attributes cannot be edited using NFS or CIFS mount points.
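A minimal sketch of such a mount using the native client; the server name sys1 is illustrative, and any server in the trusted storage pool can be used:

# mount -t glusterfs sys1:/r2 /mnt/r2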
- Perform the following operations to change the Automatic File Replication extended attributes so that the heal process happens from the other brick (sys1:/home/gfs/r2_1) in the replica pair to the new brick (sys5:/home/gfs/r2_5). Note that /mnt/r2 is the FUSE mount path.
- Create a new directory on the mount point and ensure that a directory with such a name is not already present.
# mkdir /mnt/r2/<name-of-nonexistent-dir>

- Delete the directory and set the extended attributes.
# rmdir /mnt/r2/<name-of-nonexistent-dir>
# setfattr -n trusted.non-existent-key -v abc /mnt/r2
# setfattr -x trusted.non-existent-key /mnt/r2
- Ensure that the extended attributes on the other bricks in the replica (in this example, trusted.afr.r2-client-0) are not set to zero.
# getfattr -d -m. -e hex /home/gfs/r2_1
# file: home/gfs/r2_1
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000
trusted.afr.r2-client-0=0x000000000000000300000002
trusted.afr.r2-client-1=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000000000007ffffffe
trusted.glusterfs.volume-id=0xde822e25ebd049ea83bfaa3c4be2b440
- Execute the replace-brick command with the force option:
# gluster volume replace-brick r2 sys0:/home/gfs/r2_0 sys5:/home/gfs/r2_5 commit force
volume replace-brick: success: replace-brick commit successful

- Check if the new brick is online.
# gluster volume status
Status of volume: r2
Gluster process                    Port     Online    Pid
---------------------------------------------------------
Brick sys5:/home/gfs/r2_5          49156    Y         5731
Brick sys1:/home/gfs/r2_1          49153    Y         5354
Brick sys2:/home/gfs/r2_2          49154    Y         5365
Brick sys3:/home/gfs/r2_3          49155    Y         5376

- Ensure that after the self-heal completes, the extended attributes are set to zero on the other bricks in the replica.
# getfattr -d -m. -e hex /home/gfs/r2_1
getfattr: Removing leading '/' from absolute path names
# file: home/gfs/r2_1
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000
trusted.afr.r2-client-0=0x000000000000000000000000
trusted.afr.r2-client-1=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000000000007ffffffe
trusted.glusterfs.volume-id=0xde822e25ebd049ea83bfaa3c4be2b440

Note that in this example, the extended attributes trusted.afr.r2-client-0 and trusted.afr.r2-client-1 are set to zero.
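If you do not want to wait for the self-heal daemon's next pass, the heal can be triggered and monitored explicitly, shown here for the example volume r2:

# gluster volume heal r2
# gluster volume heal r2 info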
8.5.3. Replacing an Old Brick with a New Brick on a Distribute Volume
Important
- Replace the brick using the commit force option:
# gluster volume replace-brick VOLNAME <BRICK> <NEW-BRICK> commit force

Example 8.6. Replace a brick on a Distribute Volume
# gluster volume replace-brick r2 sys0:/home/gfs/r2_0 sys5:/home/gfs/r2_5 commit force
volume replace-brick: success: replace-brick commit successful

- Verify that the new brick is online.
# gluster volume status
Status of volume: r2
Gluster process                    Port     Online    Pid
---------------------------------------------------------
Brick sys5:/home/gfs/r2_5          49156    Y         5731
Brick sys1:/home/gfs/r2_1          49153    Y         5354
Brick sys2:/home/gfs/r2_2          49154    Y         5365
Brick sys3:/home/gfs/r2_3          49155    Y         5376
Note
All replace-brick command options except commit force are deprecated.
