A kernel panic occurred: kernel BUG at fs/gfs2/glock.c:764!

Solution In Progress

Issue

  • A GFS2 filesystem triggered a kernel panic; see the note after the trace for the meaning of the lm_lock return code:

     [2410321.414977] dlm: lvm_global: dlm_recover_rsbs 1 done
     [2410321.414985] dlm: lvm_global: dlm_recover 3 generation 29 done: 0 ms
     [2410321.890654] TCP: request_sock_TCP: Possible SYN flooding on port 0.0.0.0:61916. Sending cookies.
     [2410334.868942] dlm: closing connection to node 1
     [2410978.308017] gfs2: fsid=cluster9:data.1: lm_lock ret -22
     [2410978.308172] gfs2: fsid=cluster9:data.1: G:  s:DF n:2/42d781 f:lIqob t:EX d:EX/0 a:0 v:0 r:4 m:200
     [2410978.308274] gfs2: fsid=cluster9:data.1:  H: s:EX f:W e:0 p:2843138 [Thread-4 (Activ] gfs2_dirty_inode+0x1d8/0x270 [gfs2]
     [2410978.308431] gfs2: fsid=cluster9:data.1:  I: n:1332/4380545 t:8 f:0x00 d:0x00000000 s:10485760
     [2410978.308542] ------------[ cut here ]------------
     [2410978.308543] kernel BUG at fs/gfs2/glock.c:764!
     [2410978.308625] invalid opcode: 0000 [#1] SMP NOPTI
     [2410978.308694] CPU: 30 PID: 2843138 Comm: Thread-4 (Activ Kdump: loaded Tainted: P           OE     -------- -  - 4.18.0-553.16.1.el8_10.x86_64 #1
     [2410978.308847] Hardware name: HPE ProLiant DL380 Gen10/ProLiant DL380 Gen10, BIOS U30 02/22/2024
     [2410978.308941] RIP: 0010:do_xmote.cold.70+0x49/0x4b [gfs2]
     [2410978.309036] Code: fd c0 e8 88 b5 9a d4 ba 01 00 00 00 48 89 de 31 ff e8 95 c8 fd ff e9 2e e4 fd ff ba 01 00 00 00 48 89 de 31 ff e8 81 c8 fd ff <0f> 0b 4c 8b 74 24 08 49 8b 57 30 48 c7 c7 88 dc fd c0 49 81 c6 48
     [2410978.309234] RSP: 0018:ffffc1b04f2bba08 EFLAGS: 00010282
     [2410978.309336] RAX: 0000000000000000 RBX: ffff9c1260f97bb8 RCX: 0000000000000000
     [2410978.309443] RDX: 0000000000000000 RSI: ffff9c40ffb9e698 RDI: ffff9c40ffb9e698
     [2410978.309554] RBP: 0000000000000001 R08: 0000000000000000 R09: c0000000ffff7fff
     [2410978.309671] R10: 0000000000000001 R11: ffffc1b04f2bb5f0 R12: ffff9c14b653f000
     [2410978.309790] R13: 0000000000000000 R14: ffffffffc0fc7540 R15: ffff9c1260f97bd8
     [2410978.309911] FS:  00007fb6c28e8700(0000) GS:ffff9c40ffb80000(0000) knlGS:0000000000000000
     [2410978.310036] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     [2410978.310162] CR2: 00007fbabfa016e0 CR3: 000000030c1b4001 CR4: 00000000007706e0
     [2410978.310293] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
     [2410978.310425] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
     [2410978.310556] PKRU: 55555554
     [2410978.310681] Call Trace:
     [2410978.310810]  ? __die_body+0x1a/0x60
     [2410978.310942]  ? die+0x2a/0x50
     [2410978.311071]  ? do_trap+0xe7/0x110
     [2410978.311199]  ? do_xmote.cold.70+0x49/0x4b [gfs2]
     [2410978.311342]  ? do_invalid_op+0x36/0x40
     [2410978.311473]  ? do_xmote.cold.70+0x49/0x4b [gfs2]
     [2410978.311620]  ? invalid_op+0x14/0x20
     [2410978.311756]  ? do_xmote.cold.70+0x49/0x4b [gfs2]
     [2410978.311900]  ? do_xmote.cold.70+0x49/0x4b [gfs2]
     [2410978.312045]  gfs2_glock_nq+0x242/0x430 [gfs2]
     [2410978.312195]  ? do_filldir_main.isra.23+0x190/0x190 [gfs2]
     [2410978.312360]  gfs2_dirty_inode+0x1e2/0x270 [gfs2]
     [2410978.312539]  ? gfs2_dirty_inode+0x1d8/0x270 [gfs2]
     [2410978.312692]  __mark_inode_dirty+0x1df/0x3b0
     [2410978.312842]  generic_update_time+0xba/0xd0
     [2410978.312989]  file_update_time+0xe5/0x130
     [2410978.313133]  gfs2_file_write_iter+0xfb/0x490 [gfs2]
     [2410978.313286]  ? cshook_security_file_permission+0x45/0x1140 [falcon_lsm_serviceable]
     [2410978.313442]  ? pinnedhook_security_file_permission+0x48/0x60 [falcon_lsm_pinned_17005]
     [2410978.313606]  aio_write+0xf6/0x1c0
     [2410978.313758]  ? __smp_call_single_queue+0xa1/0x140
     [2410978.313925]  ? io_submit_one+0x7e/0x3c0
     [2410978.314070]  ? kmem_cache_alloc+0x13f/0x280
     [2410978.314214]  io_submit_one+0x131/0x3c0
     [2410978.314355]  __x64_sys_io_submit+0xa2/0x180
     [2410978.314494]  sisevt_hook_sys_io_submit+0x70/0x2c9 [sisevt]
     [2410978.314639]  ? __ia32_compat_sys_io_submit+0x170/0x170
     [2410978.314774]  ? syscall_trace_enter+0x1ff/0x2d0
     [2410978.314909]  ? __x64_sys_futex+0x14e/0x200
     [2410978.315046]  sisevt64_sys_io_submit+0x1b/0x20 [sisevt]
     [2410978.315179]  ? do_syscall_64+0x5b/0x1a0
     [2410978.315303]  ? entry_SYSCALL_64_after_hwframe+0x66/0xcb
     [2410978.315430] Modules linked in: nf_tables nfnetlink mptcp_diag xsk_diag raw_diag unix_diag af_packet_diag netlink_diag udp_diag tcp_diag inet_diag gfs2 dlm falcon_lsm_serviceable(PE) falcon_nf_netcontain(E) falcon_kal(E) falcon_lsm_pinned_17005(E) team_mode_activebackup 8021q garp mrp stp llc team sisap(POE) sunrpc twnotify(OE) sisevt(PE) dm_queue_length dm_multipath intel_rapl_msr intel_rapl_common intel_uncore_frequency intel_uncore_frequency_common isst_if_common nfit libnvdimm x86_pkg_temp_thermal intel_powerclamp coretemp crct10dif_pclmul crc32_pclmul ghash_clmulni_intel rapl intel_cstate intel_uncore ipmi_ssif pcspkr mlx5_ib ib_uverbs ses enclosure joydev ib_core acpi_ipmi mei_me mei hpilo hpwdt ipmi_si ioatdma lpc_ich dca ipmi_devintf wmi ipmi_msghandler acpi_tad acpi_power_meter binfmt_misc xfs dm_thin_pool dm_persistent_data dm_bio_prison dm_bufio libcrc32c sd_mod sr_mod cdrom sg lpfc mlx5_core nvmet_fc mgag200 drm_kms_helper nvmet nvme_fc syscopyarea sysfillrect sysimgblt
     [2410978.315477]  i2c_algo_bit drm_shmem_helper nvme_fabrics drm crc32c_intel ahci nvme_core smartpqi serio_raw libahci libata t10_pi mlxfw scsi_transport_sas scsi_transport_fc pci_hyperv_intf tls psample dm_mirror dm_region_hash dm_log dm_mod
     [2410978.316852] Red Hat flags: eBPF/event eBPF/rawtrace
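
  • Note: kernel functions return errors as negative errno values, so the "lm_lock ret -22" message above means the DLM lock request returned -EINVAL ("Invalid argument"); the call trace then shows the BUG being hit in do_xmote() in fs/gfs2/glock.c. The snippet below is illustrative only (it is not part of the original report) and simply demonstrates the errno mapping:

      #include <errno.h>
      #include <stdio.h>
      #include <string.h>

      int main(void)
      {
          /* Return code taken from the log line "gfs2: ... lm_lock ret -22". */
          int ret = -22;

          /* Kernel code reports failures as negative errno values,
           * so -22 corresponds to -EINVAL ("Invalid argument"). */
          printf("lm_lock ret %d -> errno %d (%s)\n", ret, -ret, strerror(-ret));
          printf("EINVAL == %d\n", EINVAL);
          return 0;
      }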
    

Environment

  • Red Hat Enterprise Linux Server 8 (with the High Availability and Resilient Storage Add-Ons)
  • Global File System 2 (GFS2)
