How to reconfigure a faulty brick with another brick of the same UUID in a distributed-dispersed volume


Issue

  • Need to replace a faulty brick with another brick of the same UUID due to LV corruption on the initial brick. The initial brick is offline in the volume.
# gluster volume status dev_backup

[output removed for online bricks]
Brick servera:/gluster/brick1/1       N/A      N/A      N      N/A
  • If the other bricks of the sub-volume are online in the gluster cluster, we can perform a brick reset to reset the faulty brick. 'reset-brick' lets you replace a brick with another brick at the same location and with the same UUID.
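As a sketch, the reset-brick workflow described above could look like the following. The volume name `dev_backup` and brick path `servera:/gluster/brick1/1` are taken from the status output above; adjust them to your environment, and note that the LV repair step itself depends on how the brick storage was provisioned:

```shell
# Take the faulty brick out of service (stops the brick process):
gluster volume reset-brick dev_backup servera:/gluster/brick1/1 start

# Repair or recreate the corrupted LV and remount the brick filesystem
# at the same path, then reattach the empty brick in place of the old one:
gluster volume reset-brick dev_backup servera:/gluster/brick1/1 servera:/gluster/brick1/1 commit force

# Verify the brick is back online and let self-heal rebuild its contents
# from the other bricks of the sub-volume:
gluster volume status dev_backup
gluster volume heal dev_backup info
```

Because the replacement brick starts empty, self-heal must be allowed to complete before the sub-volume regains full redundancy.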

Environment

  • Red Hat Gluster Storage 3.5
