Why does a slave node go Faulty with a KeyError exception during the history crawl of a Gluster geo-replication session in RHGS 3.1.3?
Issue
- Why does a non-root user geo-replication session show a "Faulty" status in Red Hat Gluster Storage?
# gluster volume geo-replication geovol geouser@slave-node1::geovol status

MASTER NODE     MASTER VOL    MASTER BRICK               SLAVE USER    SLAVE                             SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
------------------------------------------------------------------------------------------------------------------------------------------------------------
master-node1    ap_config     /brick/brick_georepl_01    geoaccount    geouser@slave-node1::ap_config    N/A           Faulty    N/A             N/A
master-node4    ap_config     /brick/brick_georepl_04    geoaccount    geouser@slave-node1::ap_config    N/A           Faulty    N/A             N/A
master-node3    ap_config     /brick/brick_georepl_03    geoaccount    geouser@slave-node1::ap_config    N/A           Faulty    N/A             N/A
master-node2    ap_config     /brick/brick_georepl_02    geoaccount    geouser@slave-node1::ap_config    N/A           Faulty    N/A             N/A
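
The KeyError raised during the history crawl can be confirmed from the geo-replication worker logs. The commands below are a minimal check, assuming the default RHGS log locations and reusing the ap_config volume name from the output above.

On each master node reporting Faulty:
# grep -B2 -A10 "KeyError" /var/log/glusterfs/geo-replication/ap_config/*.log

On the slave node:
# grep -B2 -A10 "KeyError" /var/log/glusterfs/geo-replication-slaves/*.log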
Environment
- Red Hat Gluster Storage 3.1
