Splunk is not utilising memory
Issue
- Splunk is not utilising memory
- Splunk processes are swapped out while a large amount of free memory is still available
- Splunk processes running in cgroups are killed by the OOM killer, for example:
Jan 01 01:01:01 hostname kernel: splunkd invoked oom-killer: gfp_mask=0xd0, order=0, oom_score_adj=0
Jan 01 01:01:01 hostname kernel: splunkd cpuset=/ mems_allowed=0-1
Jan 01 01:01:01 hostname kernel: CPU: 22 PID: 12345 Comm: splunkd Not tainted 3.10.0-957.12.1.el7.x86_64 #1
Jan 01 01:01:01 hostname kernel: Call Trace:
Jan 01 01:01:01 hostname kernel: [<ffffffffbbd63021>] dump_stack+0x19/0x1b
Jan 01 01:01:01 hostname kernel: [<ffffffffbbd5da4a>] dump_header+0x90/0x229
Jan 01 01:01:01 hostname kernel: [<ffffffffbb7ba186>] ? find_lock_task_mm+0x56/0xc0
Jan 01 01:01:01 hostname kernel: [<ffffffffbb8315e8>] ? try_get_mem_cgroup_from_mm+0x28/0x60
Jan 01 01:01:01 hostname kernel: [<ffffffffbb7ba634>] oom_kill_process+0x254/0x3d0
Jan 01 01:01:01 hostname kernel: [<ffffffffbb9010ac>] ? selinux_capable+0x1c/0x40
Jan 01 01:01:01 hostname kernel: [<ffffffffbb8353c6>] mem_cgroup_oom_synchronize+0x546/0x570
Jan 01 01:01:01 hostname kernel: [<ffffffffbb834840>] ? mem_cgroup_charge_common+0xc0/0xc0
Jan 01 01:01:01 hostname kernel: [<ffffffffbb7baec4>] pagefault_out_of_memory+0x14/0x90
Jan 01 01:01:01 hostname kernel: [<ffffffffbbd5bf52>] mm_fault_error+0x6a/0x157
Jan 01 01:01:01 hostname kernel: [<ffffffffbbd707a8>] __do_page_fault+0x3c8/0x4f0
Jan 01 01:01:01 hostname kernel: [<ffffffffbbd70905>] do_page_fault+0x35/0x90
Jan 01 01:01:01 hostname kernel: [<ffffffffbbd6c758>] page_fault+0x28/0x30
Jan 01 01:01:01 hostname kernel: Task in /system.slice/Splunkd.service/workload_pool killed as a result of limit of /system.slice/Splunkd.service/workload_pool
Jan 01 01:01:01 hostname kernel: memory: usage 22075284kB, limit 22075284kB, failcnt 32304805
Jan 01 01:01:01 hostname kernel: memory+swap: usage 30446032kB, limit 9007199254740988kB, failcnt 0
Jan 01 01:01:01 hostname kernel: kmem: usage 0kB, limit 9007199254740988kB, failcnt 0
Jan 01 01:01:01 hostname kernel: Memory cgroup stats for /system.slice/Splunkd.service/workload_pool: cache:100KB rss:22075040KB rss_huge:0KB mapped_file:0KB swap:8370748KB inactive_anon:2193744KB active_anon:19879364KB inactive_file:16KB active_file:0KB unevictable:0KB
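The cgroup statistics above show the memory cgroup for the workload pool at its limit (usage 22075284kB against a limit of 22075284kB, with a large failcnt), so the OOM kill is triggered by the cgroup's own memory limit rather than by a lack of free memory on the host. Below is a minimal sketch for confirming this on RHEL 7 (cgroup v1 memory controller); the cgroup path is taken from the log above and is only an assumption, so adjust it to match the actual Splunkd.service layout on the affected host.

#!/usr/bin/env python3
# Minimal sketch: read usage, limit and failcnt of the memory cgroup named in
# the OOM-killer message (cgroup v1 layout as used on RHEL 7).
# The cgroup path below is taken from the log above and is an assumption;
# adjust it to match your own Splunkd.service / workload pool setup.

import os

CGROUP = "/sys/fs/cgroup/memory/system.slice/Splunkd.service/workload_pool"

def read_value(name):
    """Return the integer value of a memory controller file, or None if absent."""
    path = os.path.join(CGROUP, name)
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (OSError, ValueError):
        return None

usage = read_value("memory.usage_in_bytes")
limit = read_value("memory.limit_in_bytes")
failcnt = read_value("memory.failcnt")

if usage is None or limit is None:
    print("cgroup not found or memory controller not mounted:", CGROUP)
else:
    print("usage:   %d kB" % (usage // 1024))
    print("limit:   %d kB" % (limit // 1024))
    print("failcnt:", failcnt)
    if limit and usage >= limit * 0.95:
        print("memory usage is at or near the cgroup limit; the OOM kill is "
              "driven by this limit, not by system-wide memory pressure")

If the limit shown here is smaller than expected, check where it is configured: it may come from the systemd unit (for example a MemoryLimit= setting on Splunkd.service) or from a Splunk workload management pool, since the workload_pool cgroup in the log suggests workload management is in use.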
Environment
- Red Hat Enterprise Linux 7
- Splunk Enterprise
- Splunk indexers
- Large physical memory