After a stack update, volumes created in a manually created Ceph pool are inaccessible, and any operation on the mount returns an "input/output error".

Solution In Progress

Issue

  • After a stack update, volumes present in the manually created SSD pool are inaccessible, and any access from the VM fails with an I/O error.

  • The following errors are seen in the VM console:

Jul 23 00:01:35 inviwb20kol1crdb38mv ntpd[7528]: 0.0.0.0 c011 01 freq_not_set
Jul 23 00:02:04 inviwb20kol1crdb38mv kernel: blk_update_request: I/O error, dev vdb, sector 262198507
Jul 23 00:02:04 inviwb20kol1crdb38mv kernel: XFS (vdb): metadata I/O error: block 0xfa0d4eb ("xlog_iodone") error 5 numblks 64
Jul 23 00:02:04 inviwb20kol1crdb38mv kernel: XFS (vdb): xfs_do_force_shutdown(0x2) called from line 1221 of file fs/xfs/xfs_log.c.  Return address = 0xffffffffc01abc30
Jul 23 00:02:04 inviwb20kol1crdb38mv kernel: XFS (vdb): Log I/O Error Detected.  Shutting down filesystem
Jul 23 00:02:04 inviwb20kol1crdb38mv kernel: XFS (vdb): Please umount the filesystem and rectify the problem(s)
  • A KCS article was followed in order to add the manually created volumes_ssd pool.
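A stack update can regenerate director-managed configuration, which is a common way for the cephx capabilities granted to a manually added pool to be lost, producing exactly this kind of guest I/O error. The commands below are a minimal diagnostic sketch, to be run on a Ceph monitor node; the client key name `client.openstack` and the pool names (`volumes`, `volumes_ssd`, `vms`, `images`) are assumptions for illustration and must be adjusted to the actual deployment.

```shell
# Inspect the capabilities of the client key used by Cinder/Nova.
# client.openstack is the usual key name in director-based deployments;
# substitute the key name your environment actually uses.
ceph auth get client.openstack

# If the manually created pool is missing from the osd caps, re-grant
# access. NOTE: 'ceph auth caps' replaces the entire capability list,
# so every pool the key needs must be listed, not just volumes_ssd.
ceph auth caps client.openstack \
  mon 'allow r' \
  osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=volumes_ssd, allow rwx pool=vms, allow rwx pool=images'
```

After the capabilities are restored, the guest filesystem typically still has to be unmounted and checked (or the instance rebooted), since, as the console log above shows, XFS forced a shutdown after the log I/O error.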

Environment

  • Red Hat OpenStack Platform 10.0 (RHOSP)
