Gluster geo-replication status always faulty and data is not synced

Solution Verified

Issue

When trying to set up a new Gluster geo-replicated volume, the status is always Faulty and the data never syncs. The following error appears in the gluster geo-replication log for the volume in question:

[2018-05-11 17:39:47.886759] E [master(/bricks/brick1/brickdir):474:should_crawl] _GMaster: Meta-volume is not mounted. Worker Exiting...
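This error indicates the session is configured with `use_meta_volume` but the shared-storage meta-volume is not mounted on the worker's node. A quick check, using the volume and slave names from this article (the mount point shown is the Red Hat Gluster Storage 3.x default and is an assumption here):

```shell
# Is this session configured to use the meta-volume?
gluster volume geo-replication gv0 node3::backup-vol config use_meta_volume

# Is the shared-storage volume actually mounted on this master node?
# (default mount point: /run/gluster/shared_storage)
grep gluster_shared_storage /proc/mounts
```

If the first command reports `true` but the second finds no mount, the worker will exit with the "Meta-volume is not mounted" error shown above.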

The status of the volume will show the following:

MASTER NODE    MASTER VOL    MASTER BRICK               SLAVE USER    SLAVE                     SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------
node1          gv0           /bricks/brick1/brickdir    root          ssh://node3::backup-vol   N/A           Faulty    N/A             N/A
node2          gv0           /bricks/brick2/brickdir    root          ssh://node3::backup-vol   N/A           Faulty    N/A             N/A
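As a hedged sketch of the usual remedy for this error, based on upstream Gluster documentation rather than on this article's (paywalled) solution text: enabling cluster-wide shared storage creates and mounts the `gluster_shared_storage` meta-volume on all nodes, after which the session can be restarted. Volume and slave names below are taken from the status output above.

```shell
# Create and mount the gluster_shared_storage meta-volume on all nodes
gluster volume set all cluster.enable-shared-storage enable

# Restart the geo-replication session so the workers pick up the mount
gluster volume geo-replication gv0 node3::backup-vol stop
gluster volume geo-replication gv0 node3::backup-vol start

# The STATUS column should now move from Faulty to Active/Passive
gluster volume geo-replication gv0 node3::backup-vol status
```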

Environment

  • Red Hat Gluster Storage 3.x
