You can shrink volumes while the trusted storage pool is online and available. For example, you might need to remove a brick that has become inaccessible in a distributed volume due to hardware or network failure.
Note
Data residing on the brick that you are removing is no longer accessible at the glusterFS mount point if you run the command with the force option or with no option. With the start option, the data is migrated to the other bricks and only the configuration information is removed; you can continue to access the data directly from the brick.
When shrinking distributed replicated and distributed striped volumes, you need to remove a number of bricks that is a multiple of the replica or stripe count. For example, to shrink a distributed striped volume with a stripe count of 2, you need to remove bricks in multiples of 2 (such as 4, 6, or 8). In addition, the bricks you are trying to remove must be from the same sub-volume (the same replica or stripe set). In a non-replicated volume, all bricks must be up to perform a remove-brick operation (to migrate data). In a replicated volume, at least one of the bricks in the replica set must be up.
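The multiple-of-replica rule above can be checked before running any command. The following is a minimal sketch; the replica count and brick list are hypothetical example values, and the script only validates the count, it does not call gluster:

```shell
#!/bin/sh
# Sketch: verify that the number of bricks selected for removal is a
# multiple of the replica (or stripe) count before attempting remove-brick.
# REPLICA_COUNT and BRICKS are example values, not taken from a live volume.
REPLICA_COUNT=2
BRICKS="server3:/exp3 server4:/exp4"

# Count the bricks listed for removal.
NUM_BRICKS=$(echo "$BRICKS" | wc -w)

if [ $((NUM_BRICKS % REPLICA_COUNT)) -eq 0 ]; then
    echo "OK: removing $NUM_BRICKS bricks (multiple of $REPLICA_COUNT)"
else
    echo "ERROR: $NUM_BRICKS bricks is not a multiple of $REPLICA_COUNT" >&2
    exit 1
fi
```

Remember that passing the count check is not sufficient on its own: the bricks must also belong to the same replica or stripe set.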
To shrink a volume
- Remove the brick using the following command:
# gluster volume remove-brick VOLNAME BRICK start

For example, to remove server2:/exp2:

# gluster volume remove-brick test-volume server2:/exp2 start
Remove Brick start successful
- (Optional) View the status of the remove brick operation using the following command:
# gluster volume remove-brick VOLNAME BRICK status

For example, to view the status of the remove-brick operation on the server2:/exp2 brick:

# gluster volume remove-brick test-volume server2:/exp2 status
       Node  Rebalanced-files      size  scanned  failures  status
-----------  ----------------  --------  -------  --------  -----------
  localhost                16  16777216       52         0  in progress
192.168.1.1                13  16723211       47         0  in progress

- When the data migration is complete and the gluster volume remove-brick VOLNAME BRICK status command displays the status as Completed, run the following command:

# gluster volume remove-brick VOLNAME BRICK commit

For example:

# gluster volume remove-brick test-volume server2:/exp2 commit

- Enter y to confirm the operation. The command displays the following message indicating that the remove-brick operation is successful:

Remove Brick successful
- Check the volume information using the following command:
# gluster volume info

The command displays information similar to the following:

# gluster volume info
Volume Name: test-volume
Type: Distribute
Status: Started
Number of Bricks: 3
Bricks:
Brick1: server1:/exp1
Brick3: server3:/exp3
Brick4: server4:/exp4
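The steps above commit only after the status column reads Completed. A script can make that check before committing; the following is a sketch in which STATUS holds sample text mirroring the status table shown earlier (in practice you would capture it from the gluster volume remove-brick ... status command):

```shell
#!/bin/sh
# Sketch: decide whether it is safe to commit a remove-brick operation,
# given saved "remove-brick ... status" output. STATUS below is sample
# text mirroring the table shown above, not live command output.
STATUS="       Node  Rebalanced-files      size  scanned  failures  status
  localhost                16  16777216       52         0  completed
192.168.1.1                13  16723211       47         0  completed"

# Commit only when no node still reports "in progress".
if echo "$STATUS" | grep -q "in progress"; then
    SAFE_TO_COMMIT=no
else
    SAFE_TO_COMMIT=yes
fi
echo "safe to commit: $SAFE_TO_COMMIT"
```

With every node reporting completed, the sketch prints that the commit is safe; a node still showing "in progress" would flip the answer to no.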
Important
Stopping remove-brick operation is a technology preview feature. Technology Preview features are not fully supported under Red Hat subscription level agreements (SLAs), may not be functionally complete, and are not intended for production use. However, these features provide early access to upcoming product innovations, enabling customers to test functionality and provide feedback during the development process. As Red Hat considers making future iterations of Technology Preview features generally available, we will provide commercially reasonable efforts to resolve any reported issues that customers experience when using these features.
You can cancel a remove-brick operation. After starting a remove-brick operation, you can stop it by running the stop command. Files that have already been migrated during the remove-brick operation are not migrated back to the same brick.
To stop a remove-brick operation
- Stop the remove-brick operation using the following command:
# gluster volume remove-brick VOLNAME BRICK stop

For example:

# gluster volume remove-brick test-volume server2:/exp2 stop
                                Node  Rebalanced-files  size  scanned  status
------------------------------------  ----------------  ----  -------  -------
617c923e-6450-4065-8e33-865e28d9428f                59   590      244  stopped
Stopped rebalance process on volume test-volume
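Whether you commit or stop the operation, you can confirm which bricks the volume still contains by filtering the gluster volume info output. The following sketch uses sample text mirroring the volume information shown earlier; in practice INFO would be captured from the gluster volume info command:

```shell
#!/bin/sh
# Sketch: list the bricks a volume still contains by filtering
# "gluster volume info" output. INFO below is sample text mirroring
# the output shown earlier, not live command output.
INFO="Volume Name: test-volume
Type: Distribute
Status: Started
Number of Bricks: 3
Bricks:
Brick1: server1:/exp1
Brick3: server3:/exp3
Brick4: server4:/exp4"

# Keep only the BrickN lines, then check for the removed brick.
REMAINING=$(echo "$INFO" | grep '^Brick[0-9]')
if echo "$REMAINING" | grep -q "server2:/exp2"; then
    echo "brick still present"
else
    echo "brick removed"
fi
```

Against the sample output, the removed brick server2:/exp2 no longer appears, so the sketch reports it as removed.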