Brick usage after a rebalance is not as uniform as expected in RHS
Issue
After adding two bricks to an existing replica 2 volume and running a rebalance, the data distribution across the bricks, as reported by the df command, is not as uniform as expected.
# for i in {01..12}; do ssh node${i} 'df -h /rhs/brick1'; done
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/RHS_vg1-RHS_lv1
                              27T   18T  9.0T  67% /rhs/brick1
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/RHS_vg1-RHS_lv1
                              27T   20T  7.1T  74% /rhs/brick1
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/RHS_vg1-RHS_lv1
                              27T   18T  9.2T  66% /rhs/brick1
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/RHS_vg1-RHS_lv1
                              27T   20T  7.7T  72% /rhs/brick1
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/RHS_vg1-RHS_lv1
                              27T   21T  6.3T  77% /rhs/brick1
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/RHS_vg1-RHS_lv1
                              27T   22T  5.6T  80% /rhs/brick1
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/RHS_vg1-RHS_lv1
                              27T   17T   10T  63% /rhs/brick1
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/RHS_vg1-RHS_lv1
                              27T   17T   10T  63% /rhs/brick1
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/RHS_vg1-RHS_lv1
                              27T   14T   14T  50% /rhs/brick1
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/RHS_vg1-RHS_lv1
                              27T   14T   14T  50% /rhs/brick1
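A quick way to cross-check the same information from a single node is the Gluster CLI, which reports per-node rebalance progress and per-brick disk usage. The volume name myvol below is a placeholder for the actual volume name:
# gluster volume rebalance myvol status
# gluster volume status myvol detail
The first command shows, for each node, how many files were scanned and moved and whether the rebalance completed, failed, or is still in progress. The second lists free disk space, total disk space, and inode counts for every brick, giving the same view as the df loop above without logging in to each node.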
Environment
- Red Hat Storage 2.0
- Red Hat Storage 2.1
