A gfs2 filesystem withdraws with the error: function = ea_foreach_i, file = fs/gfs2/xattr.c, line = 86

Solution Unverified

Issue

  • On Red Hat Enterprise Linux 8, a gfs2 filesystem withdrew with the following error and call trace:

     Jun 19 00:22:34 node1 kernel: gfs2: fsid=my_cluster:fs1.1: fatal: invalid metadata block#012  bh = 1506986631 (type: exp=10, found=4)#012  function = ea_foreach_i, file = fs/gfs2/xattr.c, line = 86
     Jun 19 00:22:34 node1 kernel: gfs2: fsid=my_cluster:fs1.1: about to withdraw this file system
     Jun 19 00:22:34 node1 kernel: gfs2: fsid=my_cluster:fs1.1: telling LM to unmount
     Jun 19 00:22:34 node1 kernel: dlm: fs1: leaving the lockspace group...
     Jun 19 00:22:34 node1 kernel: dlm: fs1: group event done 0 0
     Jun 19 00:22:34 node1 kernel: dlm: fs1: release_lockspace final free
     Jun 19 00:22:34 node1 kernel: gfs2: fsid=my_cluster:fs1.1: withdrawn
     Jun 19 00:22:34 node1 kernel: CPU: 10 PID: 280637 Comm: ls Kdump: loaded Not tainted 4.18.0-193.6.3.el8_2.x86_64 #1
     Jun 19 00:22:34 node1 kernel: Hardware name: BULL BullSequana S series/-, BIOS BIOS_PUR043.37.11.018 05/14/2020
     Jun 19 00:22:34 node1 kernel: Call Trace:
     Jun 19 00:22:34 node1 kernel: dump_stack+0x5c/0x80
     Jun 19 00:22:34 node1 kernel: gfs2_lm_withdraw.cold.1+0xe9/0xf8 [gfs2]
     Jun 19 00:22:34 node1 kernel: ? gfs2_getbuf+0xec/0x1a0 [gfs2]
     Jun 19 00:22:34 node1 kernel: ? ea_alloc_skeleton+0x1a0/0x1a0 [gfs2]
     Jun 19 00:22:34 node1 kernel: gfs2_metatype_check_ii+0x28/0x40 [gfs2]
     Jun 19 00:22:34 node1 kernel: ea_foreach_i+0x155/0x170 [gfs2]
     Jun 19 00:22:34 node1 kernel: ? ea_alloc_skeleton+0x1a0/0x1a0 [gfs2]
     Jun 19 00:22:34 node1 kernel: ea_foreach+0x153/0x1e0 [gfs2]
     Jun 19 00:22:34 node1 kernel: ? __switch_to_asm+0x35/0x70
     Jun 19 00:22:34 node1 kernel: gfs2_ea_find+0x64/0x90 [gfs2]
     Jun 19 00:22:34 node1 kernel: gfs2_xattr_get+0xdd/0x1c0 [gfs2]
     Jun 19 00:22:34 node1 kernel: ? kmem_cache_free+0x18c/0x1b0
     Jun 19 00:22:34 node1 kernel: __vfs_getxattr+0x53/0x70
     Jun 19 00:22:34 node1 kernel: inode_doinit_use_xattr+0x63/0x170
     Jun 19 00:22:34 node1 kernel: inode_doinit_with_dentry+0x2fc/0x480
     Jun 19 00:22:34 node1 kernel: security_d_instantiate+0x2f/0x50
     Jun 19 00:22:34 node1 kernel: d_splice_alias+0x4c/0x3c0
     Jun 19 00:22:34 node1 kernel: ? init_wait_var_entry+0x40/0x40
     Jun 19 00:22:34 node1 kernel: __gfs2_lookup+0xab/0x140 [gfs2]
     Jun 19 00:22:34 node1 kernel: ? __gfs2_lookup+0x91/0x140 [gfs2]
     Jun 19 00:22:34 node1 kernel: __lookup_slow+0x97/0x150
     Jun 19 00:22:34 node1 kernel: lookup_slow+0x35/0x50
     Jun 19 00:22:34 node1 kernel: walk_component+0x1bf/0x330
     Jun 19 00:22:34 node1 kernel: path_lookupat.isra.49+0x75/0x200
     Jun 19 00:22:34 node1 kernel: ? security_sid_to_context+0x21/0x30
     Jun 19 00:22:34 node1 kernel: ? selinux_inode_getsecurity+0x83/0xe0
     Jun 19 00:22:34 node1 kernel: filename_lookup.part.63+0xa0/0x170
     Jun 19 00:22:34 node1 kernel: ? strncpy_from_user+0x4f/0x1b0
     Jun 19 00:22:34 node1 kernel: vfs_statx+0x73/0xe0
     Jun 19 00:22:34 node1 kernel: ? strncpy_from_user+0x4f/0x1b0
     Jun 19 00:22:34 node1 kernel: __do_sys_newlstat+0x39/0x70
     Jun 19 00:22:34 node1 kernel: ? syscall_trace_enter+0x1d3/0x2c0
     Jun 19 00:22:34 node1 kernel: ? __audit_syscall_exit+0x249/0x2a0
     Jun 19 00:22:34 node1 kernel: do_syscall_64+0x5b/0x1a0
     Jun 19 00:22:34 node1 kernel: entry_SYSCALL_64_after_hwframe+0x65/0xca
     Jun 19 00:22:34 node1 kernel: RIP: 0033:0x7f8166a6e009
     Jun 19 00:22:34 node1 kernel: Code: 64 c7 00 16 00 00 00 b8 ff ff ff ff c3 0f 1f 40 00 f3 0f 1e fa 48 89 f0 83 ff 01 77 34 48 89 c7 48 89 d6 b8 06 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 07 c3 66 0f 1f 44 00 00 48 8b 15 49 fe 2c 00
     Jun 19 00:22:34 node1 kernel: RSP: 002b:00007fffacbe1718 EFLAGS: 00000246 ORIG_RAX: 0000000000000006
     Jun 19 00:22:34 node1 kernel: RAX: ffffffffffffffda RBX: 0000556a65da83c0 RCX: 00007f8166a6e009
     Jun 19 00:22:34 node1 kernel: RDX: 0000556a65da83d8 RSI: 0000556a65da83d8 RDI: 00007fffacbe1720
     Jun 19 00:22:34 node1 kernel: RBP: 00007fffacbe1ae0 R08: 0000000000000000 R09: 0000556a65d97f5a
     Jun 19 00:22:34 node1 kernel: R10: 00007fffacbe172c R11: 0000000000000246 R12: 00007fffacbe1720
     Jun 19 00:22:34 node1 kernel: R13: 0000000000000000 R14: 0000000000000005 R15: 0000556a65da83d8
     Jun 19 00:22:34 node1 kernel: SELinux: inode_doinit_use_xattr:  getxattr returned 5 for dev=dm-84 ino=1506986630
     Jun 19 00:22:34 node1 kernel: SELinux: inode_doinit_use_xattr:  getxattr returned 5 for dev=dm-84 ino=1506986630
  • Similarly, a gfs2 filesystem on Red Hat Enterprise Linux 9 withdrew with the following error and call trace:

     Jul 14 22:28:27 node42 kernel: gfs2: fsid=cluster9:data.1: fatal: invalid metadata block#012  bh = 15737217 (type: exp=10, found=4)#012  function = ea_foreach_i, file = fs/gfs2/xattr.c, line = 95
     Jul 14 22:28:27 node42 kernel: gfs2: fsid=cluster9:data.1: about to withdraw this file system
     Jul 14 22:28:32 node42 kernel: gfs2: fsid=cluster9:data.1: Requesting recovery of jid 1.
     Jul 14 22:28:33 node42 kernel: gfs2: fsid=cluster9:data.1: Journal recovery complete for jid 1.
     Jul 14 22:28:33 node42 kernel: gfs2: fsid=cluster9:data.1: Glock dequeues delayed: 0
     Jul 14 22:28:33 node42 kernel: gfs2: fsid=cluster9:data.1: telling LM to unmount
     Jul 14 22:28:33 node42 kernel: dlm: data: leaving the lockspace group...
     Jul 14 22:28:33 node42 kernel: gfs2: fsid=cluster9:data.1: recover_prep ignored due to withdraw.
     Jul 14 22:28:33 node42 kernel: dlm: data: group event done 0
     Jul 14 22:28:33 node42 kernel: dlm: data: release_lockspace final free
     Jul 14 22:28:33 node42 kernel: gfs2: fsid=cluster9:data.1: File system withdrawn
     Jul 14 22:28:33 node42 kernel: CPU: 0 PID: 8179 Comm: du Kdump: loaded Not tainted 5.14.0-427.40.1.el9_4.x86_64 #1
     Jul 14 22:28:33 node42 kernel: Hardware name: VMware, Inc. VMware7,1/440BX Desktop Reference Platform, BIOS VMW71.00V.24224532.B64.2408191458 08/19/2024
     Jul 14 22:28:33 node42 kernel: Call Trace:
     Jul 14 22:28:33 node42 kernel: <TASK>
     Jul 14 22:28:33 node42 kernel: dump_stack_lvl+0x34/0x48
     Jul 14 22:28:33 node42 kernel: gfs2_withdraw.cold+0xab/0xcd [gfs2]
     Jul 14 22:28:33 node42 kernel: ? __pfx_ea_find_i+0x10/0x10 [gfs2]
     Jul 14 22:28:33 node42 kernel: gfs2_metatype_check_ii+0x35/0x50 [gfs2]
     Jul 14 22:28:33 node42 kernel: ea_foreach_i+0x146/0x170 [gfs2]
     Jul 14 22:28:33 node42 kernel: ? __pfx_ea_find_i+0x10/0x10 [gfs2]
     Jul 14 22:28:33 node42 kernel: ea_foreach+0x126/0x1f0 [gfs2]
     Jul 14 22:28:33 node42 kernel: gfs2_ea_find+0x64/0x90 [gfs2]
     Jul 14 22:28:33 node42 kernel: gfs2_xattr_get+0x12c/0x1d0 [gfs2]
     Jul 14 22:28:33 node42 kernel: ? __kmem_cache_alloc_node+0x1c7/0x2d0
     Jul 14 22:28:33 node42 kernel: ? inode_doinit_use_xattr+0x35/0x180
     Jul 14 22:28:33 node42 kernel: __vfs_getxattr+0x50/0x70
     Jul 14 22:28:33 node42 kernel: inode_doinit_use_xattr+0x63/0x180
     Jul 14 22:28:33 node42 kernel: inode_doinit_with_dentry+0x196/0x510
     Jul 14 22:28:33 node42 kernel: ? perf_trace_gfs2_bmap+0xd7/0x140 [gfs2]
     Jul 14 22:28:33 node42 kernel: security_d_instantiate+0x2c/0x50
     Jul 14 22:28:33 node42 kernel: d_splice_alias+0x46/0x2b0
     Jul 14 22:28:33 node42 kernel: __gfs2_lookup+0x8a/0x130 [gfs2]
     Jul 14 22:28:33 node42 kernel: ? __lookup_slow+0x81/0x130
     Jul 14 22:28:33 node42 kernel: __lookup_slow+0x81/0x130
     Jul 14 22:28:33 node42 kernel: walk_component+0x158/0x1d0
     Jul 14 22:28:33 node42 kernel: path_lookupat+0x6e/0x1c0
     Jul 14 22:28:33 node42 kernel: ? xas_load+0x9/0xa0
     Jul 14 22:28:33 node42 kernel: filename_lookup+0xcf/0x1d0
     Jul 14 22:28:33 node42 kernel: ? list_lru_add+0x14e/0x190
     Jul 14 22:28:33 node42 kernel: ? _copy_to_user+0x1a/0x30
     Jul 14 22:28:33 node42 kernel: ? cp_new_stat+0x150/0x180
     Jul 14 22:28:33 node42 kernel: ? audit_filter_rules.constprop.0+0x2c5/0xd30
     Jul 14 22:28:33 node42 kernel: ? path_get+0x11/0x30
     Jul 14 22:28:33 node42 kernel: vfs_statx+0x8d/0x170
     Jul 14 22:28:33 node42 kernel: ? __audit_getname+0x2d/0x50
     Jul 14 22:28:33 node42 kernel: vfs_fstatat+0x54/0x70
     Jul 14 22:28:33 node42 kernel: __do_sys_newfstatat+0x26/0x60
     Jul 14 22:28:33 node42 kernel: ? exit_to_user_mode_loop+0xd0/0x130
     Jul 14 22:28:33 node42 kernel: ? exit_to_user_mode_prepare+0xec/0x100
     Jul 14 22:28:33 node42 kernel: ? auditd_test_task+0x3c/0x50
     Jul 14 22:28:33 node42 kernel: ? __audit_syscall_entry+0xef/0x140
     Jul 14 22:28:33 node42 kernel: ? syscall_trace_enter.constprop.0+0x126/0x1a0
     Jul 14 22:28:33 node42 kernel: do_syscall_64+0x59/0x90
     Jul 14 22:28:33 node42 kernel: ? do_syscall_64+0x69/0x90
     Jul 14 22:28:33 node42 kernel: ? do_syscall_64+0x69/0x90
     Jul 14 22:28:33 node42 kernel: entry_SYSCALL_64_after_hwframe+0x72/0xdc
     Jul 14 22:28:33 node42 kernel: RIP: 0033:0x7f57cc2fcf1e
     Jul 14 22:28:33 node42 kernel: Code: 48 89 f2 b9 00 01 00 00 48 89 fe bf 9c ff ff ff e9 07 00 00 00 0f 1f 80 00 00 00 00 f3 0f 1e fa 41 89 ca b8 06 01 00 00 0f 05 <3d> 00 f0 ff ff 77 0b 31 c0 c3 0f 1f 84 00 00 00 00 00 48 8b 15 c9
     Jul 14 22:28:33 node42 kernel: RSP: 002b:00007ffeb883f938 EFLAGS: 00000246 ORIG_RAX: 0000000000000106
     Jul 14 22:28:33 node42 kernel: RAX: ffffffffffffffda RBX: 000055cd22245d50 RCX: 00007f57cc2fcf1e
     Jul 14 22:28:33 node42 kernel: RDX: 000055cd22245dc0 RSI: 000055cd22245e50 RDI: 0000000000000007
     Jul 14 22:28:33 node42 kernel: RBP: 000055cd22245dc0 R08: 000055cd2254ae10 R09: 0000000000000000
     Jul 14 22:28:33 node42 kernel: R10: 0000000000000100 R11: 0000000000000246 R12: 000055cd22245d50
     Jul 14 22:28:33 node42 kernel: R13: 000000006874d919 R14: 000000002ee42a66 R15: 000055cd21b1e7f0
     Jul 14 22:28:33 node42 kernel: </TASK>
     Jul 14 22:28:33 node42 kernel: SELinux: inode_doinit_use_xattr:  getxattr returned 5 for dev=dm-8 ino=15733357
     Jul 14 22:28:33 node42 kernel: SELinux: inode_doinit_use_xattr:  getxattr returned 5 for dev=dm-8 ino=15733357
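
For reference, the "type: exp=10, found=4" codes in the withdraw message are GFS2 on-disk metadata block types (the GFS2_METATYPE_* constants in the kernel's include/uapi/linux/gfs2_ondisk.h). The following minimal sketch decodes them; the decode_metatype helper is illustrative only, not part of any GFS2 tooling:

```python
# GFS2 on-disk metadata block types, from the GFS2_METATYPE_* constants
# in include/uapi/linux/gfs2_ondisk.h (partial list).
GFS2_METATYPES = {
    0: "NONE",  # unused
    1: "SB",    # superblock
    2: "RG",    # resource group header
    3: "RB",    # resource group bitmap
    4: "DI",    # dinode (inode)
    5: "IN",    # indirect block
    6: "LF",    # directory leaf
    7: "JD",    # journaled data
    8: "LH",    # log header
    9: "LD",    # log descriptor
    10: "EA",   # extended attribute
    11: "ED",   # extended attribute data
}

def decode_metatype(exp: int, found: int) -> str:
    """Decode the exp/found codes from a gfs2 'invalid metadata block' message."""
    name = GFS2_METATYPES.get
    return f"expected {name(exp, '?')}, found {name(found, '?')}"

# The error in the traces above:
print(decode_metatype(10, 4))  # → expected EA, found DI
```

In other words, while walking an inode's extended-attribute chain, gfs2 read a block that should have been an EA block but actually contained a dinode, indicating on-disk corruption. The accompanying SELinux messages report getxattr returning 5 (EIO), the error a withdrawn gfs2 filesystem returns to callers.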
    

Environment

  • Red Hat Enterprise Linux 7, 8, or 9 (with the Resilient Storage Add-On)
  • A Global File System 2 (GFS2) filesystem
