Reservation conflicts in a RHEL 7 High Availability cluster using fence_scsi after adding a new storage device to the cluster


Issue

  • I added a new device to the devices list on my fence_scsi stonith device and now I'm getting reservation conflicts in the logs and I/O errors when accessing that device.
  • After adding a new device to a working fence_scsi stonith device, the node keys are never registered on the new device and writes to it fail:
Apr 23 11:14:40 rhel7-node2 kernel: [69591.038343] sd 2:0:0:0: reservation conflict
Apr 23 11:14:40 rhel7-node2 kernel: [69591.042034] sd 2:0:0:0: [sdd] Unhandled error code
Apr 23 11:14:40 rhel7-node2 kernel: [69591.048040] sd 2:0:0:0: [sdd]
Apr 23 11:14:40 rhel7-node2 kernel: [69591.054034] Result: hostbyte=DID_OK driverbyte=DRIVER_OK
Apr 23 11:14:40 rhel7-node2 kernel: [69591.060041] sd 2:0:0:0: [sdd] CDB:
Apr 23 11:14:40 rhel7-node2 kernel: [69591.063062] Write(10): 2a 00 00 00 00 48 00 00 10 00
Apr 23 11:14:40 rhel7-node2 kernel: [69591.072049] blk_update_request: 263 callbacks suppressed
Apr 23 11:14:40 rhel7-node2 kernel: [69591.078040] end_request: critical nexus error, dev sdd, sector 72
Apr 23 11:14:40 rhel7-node2 kernel: [69591.087043] device-mapper: multipath: Failing path 8:48.
  • We added a new storage device to our cluster, but fence_scsi hasn't unfenced it.
  • Why hasn't the cluster registered the node keys to the new device that we added when using fence_scsi? (See the verification commands after this list.)
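
As a quick check (a sketch only, assuming the newly added LUN is /dev/sdd as in the log above and that an already-unfenced LUN is available as /dev/sdc; adjust the device paths for your environment), the SCSI-3 persistent reservation registrations on each device can be compared with sg_persist from the sg3_utils package:

# List the keys registered on a device that was already unfenced;
# each cluster node should have one key registered here.
sg_persist --in --read-keys --device=/dev/sdc

# List the keys registered on the newly added device; if this node's
# key is missing, its writes to the device are rejected with a
# reservation conflict.
sg_persist --in --read-keys --device=/dev/sdd

# Show the current reservation holder and type on the new device
# (fence_scsi uses a "Write Exclusive, registrants only" reservation).
sg_persist --in --read-reservation --device=/dev/sdd

Comparing the two outputs shows whether the cluster node keys were ever registered on the new device.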

Environment

  • Red Hat Enterprise Linux (RHEL) 7 with the High Availability Add On
  • fence-agents-scsi-4.0.11-11.el7 or later
  • One or more stonith devices configured to use fence_scsi as the agent
    • Device is configured with `meta provides="unfencing"` (see the example after this list). If this is not configured, that is an alternate explanation for the lack of unfencing.
  • A new storage device has been added to the stonith device's device list or to a clustered volume group after other devices already existed and had been "unfenced"
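
For reference, a minimal sketch of a fence_scsi stonith device created with unfencing enabled (the resource name, device paths, and node names below are examples, not taken from this cluster):

# Create a fence_scsi stonith device with unfencing enabled
pcs stonith create fence-scsi fence_scsi \
    pcmk_host_list="rhel7-node1 rhel7-node2" \
    devices="/dev/mapper/mpatha,/dev/mapper/mpathb" \
    meta provides=unfencing

# Display an existing stonith device and confirm the meta attribute is present
pcs stonith show fence-scsi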
