Why does a slave node go Faulty with a KeyError exception during the history crawl of a Gluster geo-replication session in RHGS 3.1.3?

Issue

  • Why does a non-root user geo-replication session show a "Faulty" status in Red Hat Gluster Storage?
# gluster volume geo-replication ap_config geoaccount@slave-node1::ap_config status

MASTER NODE     MASTER VOL    MASTER BRICK               SLAVE USER    SLAVE                                SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
----------------------------------------------------------------------------------------------------------------------------------------------------------------
master-node1    ap_config     /brick/brick_georepl_01    geoaccount    geoaccount@slave-node1::ap_config    N/A           Faulty    N/A             N/A
master-node4    ap_config     /brick/brick_georepl_04    geoaccount    geoaccount@slave-node1::ap_config    N/A           Faulty    N/A             N/A
master-node3    ap_config     /brick/brick_georepl_03    geoaccount    geoaccount@slave-node1::ap_config    N/A           Faulty    N/A             N/A
master-node2    ap_config     /brick/brick_georepl_02    geoaccount    geoaccount@slave-node1::ap_config    N/A           Faulty    N/A             N/A

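The KeyError traceback that marks a worker Faulty is written to the geo-replication worker logs on the master nodes. As a quick way to confirm it (a sketch assuming the default RHGS log layout, where each session keeps its logs under a directory named after the master volume; the exact log file name varies with the slave host and volume):

# grep -B2 -A10 "KeyError" /var/log/glusterfs/geo-replication/ap_config/*.log

Each brick runs its own worker and logs its own traceback, so check the logs on every master node whose brick is listed as Faulty.
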
Environment

  • Red Hat Gluster Storage 3.1
