Multiple quota-related assertions are being logged for a gfs2 filesystem

Solution In Progress

Issue

Multiple quota-related assertions are being logged for a gfs2 filesystem:

Apr 24 17:11:21 node761 kernel: gfs2: fsid=EDAP_P_cluster:saswork_05_fs.3: warning: assertion "!ip->i_qadata->qa_qd_num" failed at function = gfs2_quota_hold, file = fs/gfs2/quota.c, line = 572
Apr 24 17:11:21 node761 kernel: CPU: 0 PID: 4654 Comm: sas Kdump: loaded Tainted: P            E  ------------   3.10.0-1160.83.1.el7.x86_64 #1
Apr 24 17:11:21 node761 kernel: Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 11/12/2020
Apr 24 17:11:21 node761 kernel: Call Trace:
Apr 24 17:11:21 node761 kernel: [<ffffffffacfb1bec>] dump_stack+0x19/0x1f
Apr 24 17:11:21 node761 kernel: [<ffffffffc06fdf11>] gfs2_assert_warn_i+0x81/0x110 [gfs2]
Apr 24 17:11:21 node761 kernel: [<ffffffffc06f1c71>] gfs2_quota_hold+0x171/0x1b0 [gfs2]
Apr 24 17:11:21 node761 kernel: [<ffffffffc06cd827>] punch_hole+0x567/0x1140 [gfs2]
Apr 24 17:11:21 node761 kernel: [<ffffffffac9d980e>] ? __pagevec_lookup+0x1e/0x30
Apr 24 17:11:21 node761 kernel: [<ffffffffac9da3e1>] ? truncate_inode_pages_range+0x2b1/0x750
Apr 24 17:11:21 node761 kernel: [<ffffffffc06ce5a8>] gfs2_iomap_end+0x1a8/0x1e0 [gfs2]
Apr 24 17:11:21 node761 kernel: [<ffffffffacacb55d>] iomap_apply+0xed/0x160
Apr 24 17:11:21 node761 kernel: [<ffffffffacacb671>] iomap_file_buffered_write+0xa1/0xe0
Apr 24 17:11:21 node761 kernel: [<ffffffffacacb070>] ? iomap_dirty_actor+0x1e0/0x1e0
Apr 24 17:11:21 node761 kernel: [<ffffffffc06e771f>] gfs2_file_aio_write+0x31f/0x340 [gfs2]
Apr 24 17:11:21 node761 kernel: [<ffffffffc047c8ae>] ? 0xffffffffc047c8ad
Apr 24 17:11:21 node761 kernel: [<ffffffffc05ca377>] ? cshook_systemcalltable_pre_compat_sys_ioctl+0x256c7/0x2e870 [falcon_lsm_serviceable]
Apr 24 17:11:21 node761 kernel: [<ffffffffc05ca399>] ? cshook_systemcalltable_pre_compat_sys_ioctl+0x256e9/0x2e870 [falcon_lsm_serviceable]
Apr 24 17:11:21 node761 kernel: [<ffffffffc058944e>] ? cshook_network_ops_inet6_sockraw_release+0x90de/0x1a830 [falcon_lsm_serviceable]
Apr 24 17:11:21 node761 kernel: [<ffffffffaca5ae73>] do_sync_write+0x93/0xe0
Apr 24 17:11:21 node761 kernel: [<ffffffffaca5b940>] vfs_write+0xc0/0x1f0
Apr 24 17:11:21 node761 kernel: [<ffffffffaca5c8c2>] SyS_pwrite64+0x92/0xc0
Apr 24 17:11:21 node761 kernel: [<ffffffffc046889f>] unload_network_ops_symbols+0x52df/0x73b0 [falcon_lsm_pinned_14203]
Apr 24 17:11:21 node761 kernel: [<ffffffffacfc539a>] system_call_fastpath+0x25/0x2a

Apr 25 02:30:32 node761 kernel: gfs2: fsid=EDAP_P_cluster:saswork_05_fs.3: warning: assertion "ip->i_qadata->qa_ref == 0" failed at function = gfs2_evict_inode, file = fs/gfs2/super.c, line = 1660
Apr 25 02:30:32 node761 kernel: CPU: 4 PID: 68137 Comm: cleanwork Kdump: loaded Tainted: P            E  ------------   3.10.0-1160.83.1.el7.x86_64 #1
Apr 25 02:30:32 node761 kernel: Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 11/12/2020
Apr 25 02:30:32 node761 kernel: Call Trace:
Apr 25 02:30:32 node761 kernel: [<ffffffffacfb1bec>] dump_stack+0x19/0x1f
Apr 25 02:30:32 node761 kernel: [<ffffffffc06fdf11>] gfs2_assert_warn_i+0x81/0x110 [gfs2]
Apr 25 02:30:32 node761 kernel: [<ffffffffc06f9bb0>] gfs2_evict_inode+0x670/0x6c0 [gfs2]
Apr 25 02:30:32 node761 kernel: [<ffffffffaca7a644>] evict+0xb4/0x180
Apr 25 02:30:32 node761 kernel: [<ffffffffaca7aa7c>] iput+0xfc/0x190
Apr 25 02:30:32 node761 kernel: [<ffffffffaca6e446>] do_unlinkat+0x1b6/0x2e0
Apr 25 02:30:32 node761 kernel: [<ffffffffc0574e03>] ? cshook_security_file_free_security+0x6b23/0x7230 [falcon_lsm_serviceable]
Apr 25 02:30:32 node761 kernel: [<ffffffffc05f24ca>] ? cshook_network_ops_inet6_sockraw_recvmsg+0x1ec6a/0x227a0 [falcon_lsm_serviceable]
Apr 25 02:30:32 node761 kernel: [<ffffffffaca6f556>] SyS_unlink+0x16/0x20
Apr 25 02:30:32 node761 kernel: [<ffffffffc046a0d3>] unload_network_ops_symbols+0x6b13/0x73b0 [falcon_lsm_pinned_14203]
Apr 25 02:30:32 node761 kernel: [<ffffffffacfc539a>] system_call_fastpath+0x25/0x2a

Apr 19 04:16:26 node761 kernel: gfs2: fsid=EDAP_P_cluster:sasdata_21_fs.5: fatal: invalid metadata block
  bh = 5555456579 (magic number)
  function = gfs2_meta_indirect_buffer, file = fs/gfs2/meta_io.c, line = 429
Apr 19 04:16:26 node761 kernel: gfs2: fsid=EDAP_P_cluster:sasdata_21_fs.5: about to withdraw this file system
Apr 19 04:16:26 node761 kernel: gfs2: fsid=EDAP_P_cluster:sasdata_21_fs.5: warning: assertion "!qd->qd_change" failed at function = gfs2_quota_cleanup, file = fs/gfs2/quota.c, line = 1467
Apr 19 04:16:26 node761 kernel: CPU: 6 PID: 39888 Comm: kworker/6:8H Kdump: loaded Tainted: P            E  ------------   3.10.0-1160.83.1.el7.x86_64 #1
Apr 19 04:16:26 node761 kernel: Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 11/12/2020
Apr 19 04:16:26 node761 kernel: Workqueue: glock_workqueue glock_work_func [gfs2]
Apr 19 04:16:26 node761 kernel: Call Trace:
Apr 19 04:16:26 node761 kernel: [<ffffffff84bb1bec>] dump_stack+0x19/0x1f
Apr 19 04:16:26 node761 kernel: [<ffffffffc0a4df11>] gfs2_assert_warn_i+0x81/0x110 [gfs2]
Apr 19 04:16:26 node761 kernel: [<ffffffffc0a426f4>] gfs2_quota_cleanup+0x214/0x260 [gfs2]
Apr 19 04:16:26 node761 kernel: [<ffffffffc0a4b250>] gfs2_make_fs_ro+0x100/0x260 [gfs2]
Apr 19 04:16:26 node761 kernel: [<ffffffffc0a4db08>] gfs2_withdraw+0x1a8/0x4a0 [gfs2]
Apr 19 04:16:26 node761 kernel: [<ffffffffc0a4e13b>] gfs2_meta_check_ii+0x3b/0x50 [gfs2]
Apr 19 04:16:26 node761 kernel: [<ffffffffc0a341e1>] gfs2_meta_indirect_buffer+0xf1/0x160 [gfs2]
Apr 19 04:16:26 node761 kernel: [<ffffffffc0a2f524>] gfs2_inode_refresh+0x34/0x90 [gfs2]
Apr 19 04:16:26 node761 kernel: [<ffffffff844e8bb4>] ? update_curr+0x164/0x1f0
Apr 19 04:16:26 node761 kernel: [<ffffffffc0a2f5f8>] inode_go_lock+0x78/0xf0 [gfs2]
Apr 19 04:16:26 node761 kernel: [<ffffffffc0a2aaea>] do_promote+0x1ba/0x320 [gfs2]
Apr 19 04:16:26 node761 kernel: [<ffffffffc0a2cfe8>] finish_xmote+0x178/0x4f0 [gfs2]
Apr 19 04:16:26 node761 kernel: [<ffffffffc0a2db02>] glock_work_func+0xd2/0x140 [gfs2]
Apr 19 04:16:26 node761 kernel: [<ffffffff844c31df>] process_one_work+0x17f/0x440
Apr 19 04:16:26 node761 kernel: [<ffffffff844c4326>] worker_thread+0x126/0x3c0
Apr 19 04:16:26 node761 kernel: [<ffffffff844c4200>] ? manage_workers.isra.26+0x2b0/0x2b0
Apr 19 04:16:26 node761 kernel: [<ffffffff844cb511>] kthread+0xd1/0xe0
Apr 19 04:16:26 node761 kernel: [<ffffffff844cb440>] ? insert_kthread_work+0x40/0x40
Apr 19 04:16:26 node761 kernel: [<ffffffff84bc51dd>] ret_from_fork_nospec_begin+0x7/0x21
Apr 19 04:16:26 node761 kernel: [<ffffffff844cb440>] ? insert_kthread_work+0x40/0x40
Apr 19 04:16:26 node761 kernel: gfs2: fsid=EDAP_P_cluster:sasdata_21_fs.5: Requesting recovery of jid 5.

Apr 14 13:58:20 node770 kernel: gfs2: fsid=EDAP_P_cluster:sasdata_21_fs.8: fatal: assertion "atomic_read(&sdp->sd_log_blks_free) <= sdp->sd_jdesc->jd_blocks" failed
   function = log_refund, file = fs/gfs2/log.c, line = 876
Apr 14 13:58:20 node770 kernel: gfs2: fsid=EDAP_P_cluster:sasdata_21_fs.8: about to withdraw this file system
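To confirm whether a node is hitting these assertions, the quoted assertion expressions can be pulled out of the kernel log. Below is a minimal sketch; in practice you would run the `grep` against `/var/log/messages` (or `journalctl -k` output) on each cluster node, and the inline log line here is just a sample taken from the messages above.

```shell
#!/bin/sh
# Sample log line (normally this would come from /var/log/messages).
log='Apr 24 17:11:21 node761 kernel: gfs2: fsid=EDAP_P_cluster:saswork_05_fs.3: warning: assertion "!ip->i_qadata->qa_qd_num" failed at function = gfs2_quota_hold, file = fs/gfs2/quota.c, line = 572'

# Extract just the quoted assertion expression from each matching line.
echo "$log" | grep -oE 'assertion "[^"]+"'
```

Running the same pattern over the full log, with `sort | uniq -c`, gives a quick count of how often each distinct gfs2 assertion fired per node.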

Environment

  • Red Hat Enterprise Linux Server 7 (with the High Availability and Resilient Storage Add-Ons)
  • A Global File System 2 (gfs2) filesystem
