Why does the geo-replication status show as "Hybrid Crawl" and then go to the "Faulty" state in Red Hat Gluster Storage 3.0?
Issue
- Why does the geo-replication status show as "Hybrid Crawl" and then go to the "Faulty" state in Red Hat Gluster Storage 3.0?
Snippet from the master geo-replication status (a sketch for polling the status over time follows the output):
# gluster vol geo-rep master_vol <SLAVE_NODE_1>::slave_vol status
MASTER NODE     MASTER VOL    MASTER BRICK    SLAVE                        STATUS          CHECKPOINT STATUS    CRAWL STATUS
---------------------------------------------------------------------------------------------------------------------------
node1_master    master_vol    /bricks/data    <SLAVE_NODE_1>::slave_vol    Active          N/A                  N/A
node1_master    master_vol    /bricks/data    <SLAVE_NODE_1>::slave_vol    Passive         N/A                  N/A

# gluster vol geo-rep master_vol <SLAVE_NODE_1>::slave_vol status
MASTER NODE     MASTER VOL    MASTER BRICK    SLAVE                        STATUS          CHECKPOINT STATUS    CRAWL STATUS
---------------------------------------------------------------------------------------------------------------------------
node1_master    master_vol    /bricks/data    <SLAVE_NODE_1>::slave_vol    Hybrid Crawl    N/A                  N/A
node1_master    master_vol    /bricks/data    <SLAVE_NODE_1>::slave_vol    Passive         N/A                  N/A

# gluster vol geo-rep master_vol <SLAVE_NODE_1>::slave_vol status
MASTER NODE     MASTER VOL    MASTER BRICK    SLAVE                        STATUS          CHECKPOINT STATUS    CRAWL STATUS
---------------------------------------------------------------------------------------------------------------------------
node1_master    master_vol    /bricks/data    <SLAVE_NODE_1>::slave_vol    faulty          N/A                  N/A
node1_master    master_vol    /bricks/data    <SLAVE_NODE_1>::slave_vol    Passive         N/A                  N/A
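To correlate the moment the worker moves from Active to Hybrid Crawl and then to faulty with the geo-replication logs, the status command can simply be polled in a loop. This is only a minimal sketch; the volume and slave names are the same placeholders used above, and the output file is an arbitrary example:

# Poll the geo-replication status every 30 seconds with a timestamp,
# so the Active -> Hybrid Crawl -> faulty transition can be matched
# against the geo-replication log entries later.
while true; do
    date
    gluster volume geo-replication master_vol <SLAVE_NODE_1>::slave_vol status
    sleep 30
done | tee -a /tmp/georep-status-history.log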
- Why are files from the master replicated to the slave volume with a size of 0 bytes, and why does the session always stop after 39110 files?
Snippet from the master geo-replication logs (a quick check for the 0-byte files on the slave is sketched after the log):
# cat /var/log/glusterfs/geo-replication/geovol/ssh%3A%2F%2Froot%4010.65.209.255%3Agluster%3A%2F%2F127.0.0.1%3Arepvol.log
[2015-05-04 15:31:46.269862] I [gsyncd(slave):635:main_i] <top>: syncing: gluster://localhost:a1278-acc-r2t01s01-georep
[2015-05-04 15:31:46.688821] I [gsyncd(slave):635:main_i] <top>: syncing: gluster://localhost:a1278-acc-r2t01s01-georep
[2015-05-04 15:31:47.303378] I [resource(slave):765:service_loop] GLUSTER: slave listening
[2015-05-04 15:31:47.724041] I [resource(slave):765:service_loop] GLUSTER: slave listening
[2015-05-04 15:32:28.830074] E [repce(slave):117:worker] <top>: call failed: <<============
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 113, in worker
    res = getattr(self.obj, rmeth)(*in_data[2:])
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 672, in meta_ops
    atime = e['stat']['atime']
KeyError: 'atime'
[2015-05-08 15:42:23.61739] W [syncdutils(slave):480:errno_wrap] <top>: reached maximum retries (['.gfid/f41e58fd-2ad0-4836-8e8e-a4e247b1bb10', 'glusterfs.gfid.newfile', '\x00\x00\x0c\xcb\x00\x00\x0c\xcbaca025a5-713f-4fe0-899e-f2c12889a9b0\x00\x00\x00\x81\xb4train_19936.xml\x00\x00\x00\x01\xb4\x00\x00\x00\x00\x00\x00\x00\x00'])... <<=================
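The 0-byte files reported above can be confirmed from a client mount of the slave volume. A minimal check, assuming a hypothetical mount point of /mnt/slave_vol:

# Mount the slave volume on a client (the mount point is only an example)
mkdir -p /mnt/slave_vol
mount -t glusterfs <SLAVE_NODE_1>:/slave_vol /mnt/slave_vol

# Count regular files that exist on the slave but carry no data yet
find /mnt/slave_vol -type f -size 0 | wc -l

# Compare against the total number of regular files replicated so far
find /mnt/slave_vol -type f | wc -l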
- The correct prerequisite steps were followed before configuring the geo-replication session, as mentioned in the documentation under Prerequisite Steps for Configuring Geo-replication.
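For reference, those prerequisite and setup steps normally include password-less SSH from the master node to the slave node, generating the common pem keys, and creating the session with push-pem. The sequence below is only a sketch of the usual RHGS 3.0 commands using the placeholder names from this article; the Administration Guide remains the authoritative source:

# Password-less SSH from the master node that runs the create command to the slave node
ssh-keygen
ssh-copy-id root@<SLAVE_NODE_1>

# Generate the common pem keys on the master cluster
gluster system:: execute gsec_create

# Create the geo-replication session and distribute the pem keys to the slave nodes
gluster volume geo-replication master_vol <SLAVE_NODE_1>::slave_vol create push-pem

# Start the session and check its status
gluster volume geo-replication master_vol <SLAVE_NODE_1>::slave_vol start
gluster volume geo-replication master_vol <SLAVE_NODE_1>::slave_vol status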
Environment
- Red Hat Gluster Storage 3.0