SCSI 'Reservation Conflict' errors while using HP Serviceguard cluster with multipath devices

Solution Verified

Issue

  • A large number of 'reservation conflict' errors are reported on the passive node of an HP Serviceguard cluster

  • The following messages are logged frequently on the passive node of an HP Serviceguard cluster. At the time of these errors, the node shown below was the passive node and did not hold the SCSI reservation on the devices. The other node in the cluster was active at that time, held the SCSI reservations on the SAN devices, and did not log these messages.

    The same behavior is observed on the other node as well when a cluster switchover takes place and the passive node becomes the active node:

    node1 kernel: [529679.405355] sd 1:0:8:0: [sdr] tag#0 CDB: Test Unit Ready 00 00 00 00 00 00
    node1 kernel: sd 1:0:8:0: [sdr] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK
    node1 kernel: sd 1:0:8:0: [sdr] tag#0 CDB: Test Unit Ready 00 00 00 00 00 00
    node1 kernel: sd 1:0:9:0: [sdt] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK
    node1 kernel: sd 1:0:9:0: [sdt] tag#0 CDB: Test Unit Ready 00 00 00 00 00 00
    node1 kernel: [529682.408352] sd 1:0:9:0: [sdt] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK
    [...]
    [529707.224477] sd 1:0:9:1: reservation conflict
    [529707.248010] sd 1:0:9:0: reservation conflict
    [529707.248071] sd 1:0:8:0: reservation conflict
    [529707.248080] sd 1:0:8:1: reservation conflict
    [...]
    
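To confirm that the conflicts correspond to a reservation held by the active node, the persistent reservation state of a path device can be inspected with `sg_persist` from the sg3_utils package. The sketch below is an assumption, not part of the original article: `/dev/sdr` is a placeholder taken from the log above, so substitute one of your own path devices. On the passive node, the key shown as holding the reservation should belong to the active node.

```shell
# Placeholder path device from the kernel log above; replace with
# one of the multipath path devices on your system.
DEV=/dev/sdr

# List the reservation keys registered on the device. The fallback
# echo keeps the sketch from aborting where sg3_utils or the device
# is unavailable.
sg_persist --in --read-keys --device="$DEV" \
    || echo "read-keys failed (device or sg3_utils unavailable)"

# Show which registered key currently holds the reservation and the
# reservation type.
sg_persist --in --read-reservation --device="$DEV" \
    || echo "read-reservation failed (device or sg3_utils unavailable)"
```

If the reservation is held by the active node's key, the conflict messages on the passive node reflect that node probing devices it is not permitted to access, which matches the symptom described above.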

Environment

  • Red Hat Enterprise Linux 6
  • Red Hat Enterprise Linux 7
  • HP Serviceguard cluster
  • HP OPEN-V SAN devices
  • DM-Multipath
