Why do PVCs from the cephfs storage class fail to mount in OCS 4.x?

Solution Verified

Issue

  • registry pod stuck in ContainerCreating because its PVC fails to mount with the error Volume ID 0001-0011-openshift-storage-0000000000000001-aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee already exists.
  • ceph mgr logs report the fs subvolume getpath command failing:

    -------------------------------- 
    log_channel(audit) log [DBG] : from='client.1234567 -' entity='client.csi-cephfs-node' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "ocs-storagecluster-cephfilesystem", "sub_name": "csi-vol-9999999-8888-7777-6666-55555544444", "group_name": "csi", "target": ["mgr", ""]}]: dispatch
    log_channel(audit) log [DBG] : from='client.1234567 -' entity='client.csi-cephfs-node' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "ocs-storagecluster-cephfilesystem", "sub_name": "csi-vol-9999999-8888-7777-6666-55555544444", "group_name": "csi", "target": ["mgr", ""]}]: dispatch
    
    2020-07-14T17:54:53.039570434Z debug 2020-07-14 17:54:53.038 7f7f17b8c700 -1 mgr.server reply reply (22) Invalid argument Traceback (most recent call last):
    2020-07-14T17:54:53.039570434Z   File "/usr/share/ceph/mgr/volumes/fs/operations/volume.py", line 158, in get_fs_handle
    2020-07-14T17:54:53.039570434Z     conn.connect()
    2020-07-14T17:54:53.039570434Z   File "/usr/share/ceph/mgr/volumes/fs/operations/volume.py", line 83, in connect
    2020-07-14T17:54:53.039570434Z     self.fs.init()
    2020-07-14T17:54:53.039570434Z   File "cephfs.pyx", line 663, in cephfs.LibCephFS.init
    2020-07-14T17:54:53.039570434Z cephfs.Error: error calling ceph_init: Connection timed out [Errno 110]
    2020-07-14T17:54:53.039570434Z 
    2020-07-14T17:54:53.039570434Z During handling of the above exception, another exception occurred:
    2020-07-14T17:54:53.039570434Z 
    2020-07-14T17:54:53.039570434Z Traceback (most recent call last):
    2020-07-14T17:54:53.039570434Z   File "/usr/share/ceph/mgr/mgr_module.py", line 974, in _handle_command
    2020-07-14T17:54:53.039570434Z     return self.handle_command(inbuf, cmd)
    2020-07-14T17:54:53.039570434Z   File "/usr/share/ceph/mgr/volumes/module.py", line 249, in handle_command
    2020-07-14T17:54:53.039570434Z     return handler(inbuf, cmd)
    2020-07-14T17:54:53.039570434Z   File "/usr/share/ceph/mgr/volumes/module.py", line 316, in _cmd_fs_subvolume_getpath
    2020-07-14T17:54:53.039570434Z     group_name=cmd.get('group_name', None))
    2020-07-14T17:54:53.039570434Z   File "/usr/share/ceph/mgr/volumes/fs/volume.py", line 205, in subvolume_getpath
    2020-07-14T17:54:53.039570434Z     with open_volume(self, volname) as fs_handle:
    2020-07-14T17:54:53.039570434Z   File "/lib64/python3.6/contextlib.py", line 81, in __enter__
    2020-07-14T17:54:53.039570434Z     return next(self.gen)
    2020-07-14T17:54:53.039570434Z   File "/usr/share/ceph/mgr/volumes/fs/operations/volume.py", line 283, in open_volume
    2020-07-14T17:54:53.039570434Z     fs_handle = vc.connection_pool.get_fs_handle(volname)
    2020-07-14T17:54:53.039570434Z   File "/usr/share/ceph/mgr/volumes/fs/operations/volume.py", line 164, in get_fs_handle
    2020-07-14T17:54:53.039570434Z     raise VolumeException(-e.args[0], e.args[1])
    2020-07-14T17:54:53.039570434Z TypeError: bad operand type for unary -: 'str'
    2020-07-14T17:54:53.039570434Z 
    2020-07-14T17:54:53.043327448Z debug 2020-07-14 17:54:53.042 7f7f1838d700  0 log_channel(audit) log [DBG] : from='client.1234567 -' entity='client.csi-cephfs-node' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "ocs-storagecluster-cephfilesystem", "sub_name": "csi-vol-9999999-8888-7777-6666-55555544444", "group_name": "csi", "target": ["mgr", ""]}]: dispatch
    ------------------------------
    
  • ceph fs commands hang and do not respond
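The symptoms above can be confirmed from the CLI. A minimal diagnostic sketch, assuming the default openshift-image-registry and openshift-storage namespaces; the registry pod name is illustrative, the toolbox pod must be enabled on the cluster, and the subvolume name is taken from the mgr log above:

```shell
# 1. Confirm the mount failure event on the stuck registry pod
#    (pod name is illustrative; adjust to your cluster)
oc -n openshift-image-registry describe pod image-registry-xxxxx | grep -A3 FailedMount

# 2. Open a shell in the rook-ceph toolbox, if deployed
oc -n openshift-storage rsh deploy/rook-ceph-tools

# 3. Inside the toolbox: check overall filesystem health, then reproduce
#    the getpath call that the mgr log shows failing
ceph fs status
ceph fs subvolume getpath ocs-storagecluster-cephfilesystem \
    csi-vol-9999999-8888-7777-6666-55555544444 --group_name csi
```

If the getpath command also hangs or returns "(22) Invalid argument", the mgr-side traceback shown above is being hit, which matches the CSI mount failure on the node.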

Environment

  • Red Hat Ceph Storage 4.x
  • Red Hat OpenShift Container Storage 4.x
