After an OCP upgrade, starting a pod with a previously defined block volume gives the error "Heuristic determination of mount point failed - input/output error"

Solution Verified

Issue

After an async OCP upgrade, provisioning new block storage works fine, and mounting these newly defined block volumes also causes no problems. However, starting a pod with a previously defined block volume fails with this error:

MountVolume.WaitForAttach failed for volume "pvc-19bbe6df-2b73-11e9-b29d-005056a01756" : Heuristic determination of mount point failed:stat /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/iscsi/iface-default/xx.xxx.xxx.xxx:3260-iqn.2016-12.org.gluster-block:e85c4109-b7fe-484b-a678-c1e5756b03a4-lun-0: input/output error

Unable to mount volumes for pod "nginx-14-v72gf_jenkins(cc1230cb-411b-11e9-8e74-005056a039a3)": timeout expired waiting for volumes to attach or mount for pod "jenkins"/"nginx-14-v72gf". list of unmounted volumes=[volume-p5gbn]. list of unattached volumes=[nginx-1 nginx-build-trigger-x7v8d volume-p5gbn default-token-9wd6s]
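
The events above are raised by the kubelet on the node where the pod is scheduled. To confirm the failure directly on that node, a quick check (the pod name, project, and mount path below are taken from the example events; substitute the real portal IP for xx.xxx.xxx.xxx) is:

# Find the node the pod was scheduled on
oc get pod nginx-14-v72gf -n jenkins -o wide

# On that node, stat the mount point named in the event; a stale
# iSCSI device typically returns the same input/output error
stat /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/iscsi/iface-default/xx.xxx.xxx.xxx:3260-iqn.2016-12.org.gluster-block:e85c4109-b7fe-484b-a678-c1e5756b03a4-lun-0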

An "ls" of /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/iscsi/iface-default/ shows I/O errors:-

[root@node01 ~]# cd /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/iscsi/iface-default/
[root@node01 iface-default]# ls -l
ls: cannot access xx.xxx.xxx.xxx:3260-iqn.2016-12.org.gluster-block:8f649bcc-71d6-419d-893c-f74b404505d2-lun-0: Input/output error
ls: cannot access xx.xxx.xxx.xxx:3260-iqn.2016-12.org.gluster-block:e85c4109-b7fe-484b-a678-c1e5756b03a4-lun-0: Input/output error
total 4
???????????  ? ?    ?             ?            ? xx.xxx.xxx.xxx:3260-iqn.2016-12.org.gluster-block:e85c4109-b7fe-484b-a678-c1e5756b03a4-lun-0
???????????  ? ?    ?             ?            ? xx.xxx.xxx.xxx:3260-iqn.2016-12.org.gluster-block:8f649bcc-71d6-419d-893c-f74b404505d2-lun-0
drwxrwsr-x. 19 root 1003130000 4096 Mar 13 21:17 xx.xxx.xxx.xxx:3260-iqn.2016-12.org.gluster-block:ea4ff328-1860-4041-8164-7d60b2244512-lun-0
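
The question marks mean stat() itself fails on those directory entries, which typically points at an iSCSI session whose backing SCSI device has gone away. A way to correlate the two dangling mount points with their sessions, sketched on the assumption that the sessions are still logged in on the node (the portal IP and IQNs are embedded in the directory names above):

# List the iSCSI sessions on the node
iscsiadm -m session

# Show the SCSI devices backing each session and their state
iscsiadm -m session -P 3 | grep -E 'Target|Current Portal|Attached scsi disk'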

Environment

   OCP was upgraded from 3.10.89 to 3.10.111
   No update of OCS was done
   The pod that used the volume in question was force-stopped (oc delete pod --force --grace-period=0) as it did not stop gracefully; see the cleanup sketch below
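
A force-deleted pod can leave the kubelet's mount point in place while the underlying iSCSI session goes stale, which matches the I/O errors shown above. A minimal manual cleanup sketch, assuming the volume is no longer needed by any pod on the node and reusing the portal/IQN from the example (not necessarily the full resolution for every case):

# Lazily unmount the stale mount point (a plain umount may hang on the dead device)
umount -l /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/iscsi/iface-default/xx.xxx.xxx.xxx:3260-iqn.2016-12.org.gluster-block:e85c4109-b7fe-484b-a678-c1e5756b03a4-lun-0

# Log out of the stale session so the next attach can log in cleanly
iscsiadm -m node -T iqn.2016-12.org.gluster-block:e85c4109-b7fe-484b-a678-c1e5756b03a4 -p xx.xxx.xxx.xxx:3260 -u

Once the stale session is cleared, redeploying the pod should allow the kubelet to attach and mount the volume again.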
