A kernel panic occurred on a cluster with gfs2 filesystems mounted: Assertion failed on line 247 of file fs/dlm/lock.c

Solution In Progress

Issue

A kernel panic occurred on a cluster with gfs2 filesystems mounted:

[  224.677672] sd 8:0:0:2: Parameters changed
[  224.835606] dlm: connecting to 1
[  224.836997] dlm: node 7: socket error sending to node 1, port 21064, sk_err=104/0
[  225.848068] dlm: closing connection to node 1
[  225.848839] dlm: closing connection to node 8
[  225.849566] dlm: closing connection to node 7
[  225.850180] dlm: closing connection to node 6
[  225.850748] dlm: closing connection to node 5
[  225.851280] dlm: closing connection to node 3
[  225.851807] dlm: closing connection to node 2
[  225.852338] dlm: closing connection to node 4
[  225.855555] dlm: data_tlktools: no userland control daemon, stopping lockspace
[  225.856136] dlm: shared_work: no userland control daemon, stopping lockspace
[  225.856659] dlm: shared_data: no userland control daemon, stopping lockspace
[  225.857241] dlm: shared: no userland control daemon, stopping lockspace
[  225.857755] dlm: clvmd: no userland control daemon, stopping lockspace
[  225.858354] dlm: dlm user daemon left 6 lockspaces
[  228.732753] sd 10:0:0:2: reservation conflict
[  228.732771] sd 8:0:0:2: reservation conflict
[  228.732778] sd 8:0:0:2: [sdc] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK
[  228.732792] sd 8:0:0:2: [sdc] tag#0 CDB: Write(10) 2a 00 39 48 15 68 00 00 08 00
[  228.732793] blk_update_request: critical nexus error, dev sdc, sector 961025384
[  228.732799] blk_update_request: critical nexus error, dev dm-3, sector 961025384
[  228.736107] sd 10:0:0:2: [sdg] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK
[  228.736620] sd 10:0:0:2: [sdg] tag#0 CDB: Write(10) 2a 00 00 15 2e c0 00 00 38 00
[  228.737127] blk_update_request: critical nexus error, dev sdg, sector 1388224
[  228.737651] blk_update_request: critical nexus error, dev dm-3, sector 1388224
[  228.738138] GFS2: fsid=rhel7cluster:shared_data.5: Error -52 writing to journal, jid=5
[  228.738689] GFS2: fsid=rhel7cluster:shared_data.5: error -52: withdrawing the file system to prevent further damage.   <---
[  228.739198] GFS2: fsid=rhel7cluster:shared_data.5: about to withdraw this file system
[  228.740671] sd 8:0:1:2: reservation conflict
[  228.741175] sd 8:0:1:2: [sde] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK
[  228.741682] sd 8:0:1:2: [sde] tag#0 CDB: Write(10) 2a 00 00 15 2e f8 00 00 08 00
[  228.742184] blk_update_request: critical nexus error, dev sde, sector 1388280
[  228.742683] blk_update_request: critical nexus error, dev dm-3, sector 1388280
[  228.743163] GFS2: fsid=rhel7cluster:shared_data.5: Error -52 writing to journal, jid=5
[  228.745590] sd 10:0:1:2: reservation conflict
[  228.746051] sd 10:0:1:2: [sdi] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK
[  228.746523] sd 10:0:1:2: [sdi] tag#0 CDB: Write(10) 2a 00 38 c8 15 e0 00 00 08 00
[  228.746989] blk_update_request: critical nexus error, dev sdi, sector 952636896
[  228.747469] blk_update_request: critical nexus error, dev dm-3, sector 952636896
[  228.747941] Buffer I/O error on dev dm-18, logical block 119079100, lost async page write     <---
[  228.750790] GFS2: fsid=rhel7cluster:shared_data.5: telling LM to unmount
[  228.751378]
               DLM:  Assertion failed on line 247 of file fs/dlm/lock.c
               DLM:  assertion:  "r->res_nodeid >= 0"
               DLM:  time = 4294896196
[  228.753660] rsb: nodeid -1 master 0 dir 2 flags 0 first 0 rlc 0 name        5         7190212
[  228.754160]
[  228.754660] ------------[ cut here ]------------
[  228.755146] kernel BUG at fs/dlm/lock.c:247!
[  228.755622] invalid opcode: 0000 [#1] SMP
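
The assertion banner and the "kernel BUG at fs/dlm/lock.c:247" oops that follows it are two halves of the same event: DLM's internal consistency checks use a DLM_ASSERT macro that logs the failed expression (and, via its second argument, dumps the offending rsb) before calling BUG(), which the kernel implements on x86 as an invalid opcode. A simplified sketch of the macro, based on the upstream definition in fs/dlm/dlm_internal.h (the exact body can vary between kernel versions):

#define DLM_ASSERT(x, do)                                               \
{                                                                       \
	if (!(x))                                                       \
	{                                                               \
		printk(KERN_ERR "\nDLM:  Assertion failed on line %d "  \
				"of file %s\n"                          \
				"DLM:  assertion:  \"%s\"\n"            \
				"DLM:  time = %lu\n",                   \
				__LINE__, __FILE__, #x, jiffies);       \
		{do}            /* e.g. dlm_print_rsb(r); */            \
		printk("\n");                                           \
		BUG();          /* produces the invalid-opcode oops */  \
	}                                                               \
}

Matching the messages above, the failing check at line 247 in this kernel is of the form DLM_ASSERT(r->res_nodeid >= 0, dlm_print_rsb(r);); and the "rsb: nodeid -1 master 0 ..." line is the rsb dump, showing the resource no longer had a valid master node id at the time of the check.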

Environment

  • Red Hat Enterprise Linux Server 5 (with Clustering and Cluster Storage)
  • Red Hat Enterprise Linux Server 6 and 7 (with the High Availability and Resilient Storage Add-Ons)
  • Global File System 2 (GFS2)
