Red Hat Ceph Storage OSDs assert with 'FAILED assert(allow_eio || !m_filestore_fail_eio || got != -5)'
Issue
- OSDs in Red Hat Ceph Storage 1.3.x assert with the following log messages:
2017-12-02 16:39:50.888858 7f883c3a2700 0 log_channel(cluster) log [INF] : 3.ca scrub starts
2017-12-02 16:39:50.890738 7f883c3a2700 0 log_channel(cluster) log [INF] : 3.ca scrub ok
2017-12-02 16:51:02.037603 7f883c3a2700 -1 os/FileStore.cc: In function 'virtual int FileStore::read(coll_t, const ghobject_t&, uint64_t, size_t, ceph::bufferlist&, uint32_t, bool)' thread 7f883c3a2700 time 2017-12-02 16:51:02.030581
os/FileStore.cc: 2854: FAILED assert(allow_eio || !m_filestore_fail_eio || got != -5)
ceph version 0.94.9-3.el7cp (7358f71bebe44c463df4d91c2770149e812bbeaa)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x85) [0xb11da5]
2: (FileStore::read(coll_t, ghobject_t const&, unsigned long, unsigned long, ceph::buffer::list&, unsigned int, bool)+0xd1b) [0x8d6dfb]
3: (ReplicatedBackend::be_deep_scrub(hobject_t const&, unsigned int, ScrubMap::object&, ThreadPool::TPHandle&)+0x311) [0x96aa41]
4: (PGBackend::be_scan_list(ScrubMap&, std::vector<hobject_t, std::allocator<hobject_t> > const&, bool, unsigned int, ThreadPool::TPHandle&)+0x2e8) [0x89a118]
5: (PG::build_scrub_map_chunk(ScrubMap&, hobject_t, hobject_t, bool, unsigned int, ThreadPool::TPHandle&)+0x213) [0x7a5363]
6: (PG::replica_scrub(MOSDRepScrub*, ThreadPool::TPHandle&)+0x4c2) [0x7a5b32]
7: (OSD::RepScrubWQ::_process(MOSDRepScrub*, ThreadPool::TPHandle&)+0xbe) [0x6a189e]
8: (ThreadPool::worker(ThreadPool::WorkThread*)+0xa76) [0xb02576]
9: (ThreadPool::WorkThread::entry()+0x10) [0xb03600]
10: (()+0x7dc5) [0x7f885da7cdc5]
11: (clone()+0x6d) [0x7f885c55f73d]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
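The return value -5 in the assert is -EIO, meaning the underlying filesystem reported an I/O error while FileStore::read was servicing the deep scrub, typically because of a failing disk sector. The sketch below models the assert condition from os/FileStore.cc in Python; the flag values are assumptions based on the usual defaults (filestore_fail_eio enabled, allow_eio false for scrub reads), not taken from this cluster's configuration:

```python
import errno

# got is the read(2) return value seen in the assert; -5 is -EIO,
# i.e. a low-level disk read error surfaced to FileStore::read.
got = -5
assert -got == errno.EIO  # EIO == 5 on Linux

# Assumed flag values (defaults): EIO is treated as fatal, so the
# OSD aborts rather than serving possibly corrupt data.
allow_eio = False
m_filestore_fail_eio = True

# The assert condition from os/FileStore.cc line 2854:
condition = allow_eio or not m_filestore_fail_eio or got != -5
print(condition)  # False -> "FAILED assert", the OSD aborts
```

Because all three clauses are false, the assert fails and the OSD deliberately crashes; checking kernel logs and SMART data on the OSD's backing device is the usual next step when this signature appears.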
Environment
- Red Hat Ceph Storage 1.3.x