Multipath errors reported when moving a pod from one worker node to another while using the IBM CSI driver in an OCS cluster


Issue

  • The IBM CSI driver fails to clear multipath device maps and underlying subpaths after a
    pod is deleted.
  • The following errors are logged when the pod is rescheduled to another worker node in the
    OCS cluster:
Aug 27 15:56:43 workernodeX multipathd: 67:224: path wwid appears to have changed. Using map wwid.  <<<
Aug 27 15:56:43 workernodeX multipathd: 68:0: path wwid appears to have changed. Using map wwid.
...
Aug 27 16:04:12 workernodeX hyperkube: E0827 16:04:12.535818    2873 xxx] Operation for "{volumeName:<ibm_csi>^SVC:6005076400810325900000000000016E podName: nodeName:}" failed. No retries permitted until 2021-08-27 16:06:14 (durationBeforeRetry 2m2s). Error: "UnmountDevice failed for volume \"pvc-6c7f2a96-862b-42f5-a1e2-31431e937f71\" (UniqueName: \"<ibm_csi>^SVC:6005076400810325900000000000016E\") on node \"worker2.tjcocp\" : kubernetes.io/csi: attacher.UnmountDevice failed: rpc error: code = Internal desc = Multipath device [/dev/dm-16] was found as WWN [6005076400810325900000000000016e] via multipath -ll command, BUT sg_inq identify this device as a different WWN: [6005076400810325900000000000016b]. Check your multipathd."
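
The WWID mismatch reported in the last error can be verified manually on the affected worker node. The commands below are a minimal diagnostic sketch, assuming the sg3_utils package is installed; the device name /dev/dm-16 and the WWNs come from the log above, so substitute the values from your own error message:

# WWID recorded in the multipath map for the device
multipath -ll /dev/dm-16

# WWID the device itself reports via SCSI INQUIRY (device identification VPD page 0x83)
sg_inq -p 0x83 /dev/dm-16

# If the two WWIDs differ (...016e vs. ...016b in the log above), the map is stale.
# Once the device is unmounted and no longer in use, flushing the stale map lets
# multipathd rebuild it from the current paths. The map name is shown on the first
# line of the multipath -ll output.
multipath -f <multipath_map_name>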

Environment

  • OpenShift Container Storage (OCS)
  • OpenShift Container Platform
  • IBM CSI Driver
  • IBM array with FCoE SAN
