Memory leak on nodes running collectd due to unbounded memory usage in collectd when it's not configured with any write plugin - Red Hat OpenStack Platform 13

Solution In Progress

Issue

Memory leak on nodes running collectd due to unbounded memory usage in collectd when it's not configured with any write plugin - Red Hat OpenStack Platform 13

This issue can affect any node where collectd is running.
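The underlying cause is that collectd queues every collected metric in memory for delivery to its write plugins; when no write plugin is configured, the queue is never drained and grows without bound. As a hedged illustration only (this solution is still in progress, the values below are arbitrary examples, and on RHOSP 13 the collectd configuration is normally managed through TripleO rather than edited directly), collectd's global WriteQueueLimitHigh and WriteQueueLimitLow options can cap that queue, or at least one write plugin can be enabled so the queue actually drains:

# /etc/collectd.conf -- illustrative sketch, not a validated RHOSP 13 fix
# Cap the internal write queue so memory usage stays bounded
# (the limits below are example values only):
WriteQueueLimitHigh 1000000
WriteQueueLimitLow   800000

# Alternatively, load at least one write plugin so queued metrics
# are drained, e.g. the csv plugin:
LoadPlugin csv
<Plugin csv>
  DataDir "/var/lib/collectd/csv"
</Plugin>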

Example: Ceph nodes affected by this issue

  • One Ceph Storage node ran out of memory:
[root@overcloud-ceph-0 log]# sar -r 1 -f /var/log/sa/sa24
Linux 3.10.0-1062.1.1.el7.x86_64 (999711-ceph02)    02/24/2020  _x86_64_    (48 CPU)

12:00:02 AM kbmemfree kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
12:10:02 AM  10811524 186775740     94.53     43000    980060  97456840     49.32 180205700    574896       204
12:20:01 AM  10738280 186848984     94.57     42776    978700  97174824     49.18 180381348    572824       200
12:30:01 AM  10619816 186967448     94.63     42676    981852  97302932     49.25 180588648    574504       184
12:40:01 AM  10270824 187316440     94.80     42592    980484  97251020     49.22 180835588    572588       204
12:50:01 AM  10003108 187584156     94.94     42576    989896  97289232     49.24 181077328    579528       196
01:00:01 AM   9855424 187731840     95.01     42556    984660  97162104     49.17 181299964    574244       152
01:10:01 AM   9574856 188012408     95.15     43064    986780  97502376     49.35 181569564    575276         8
01:20:02 AM   9380904 188206360     95.25     43052    985276  97511204     49.35 181770604    573344         8
01:30:01 AM   9318656 188268608     95.28     42840    992892  96973140     49.08 181932196    578268       124
01:40:01 AM   8968112 188619152     95.46     42836    990940  97153036     49.17 182135996    576020         0
01:50:01 AM   8790300 188796964     95.55     42820    989436  97287920     49.24 182397460    573816        12
02:00:01 AM   8591808 188995456     95.65     42784    992540  97571700     49.38 182598408    575448         8
02:10:01 AM   8476032 189111232     95.71     43140    995988  97313064     49.25 182804264    577028        32
02:20:01 AM   8030788 189556476     95.94     43040    994332  97511088     49.35 183030060    574900         4
02:30:01 AM   8064256 189523008     95.92     42976    995872  97329468     49.26 183211796    575860         4
02:40:02 AM   7817952 189769312     96.04     42844    995068  97349824     49.27 183430076    574036         4
02:50:01 AM   7475680 190111584     96.22     42828    998168  97558468     49.37 183626612    575320        16
03:00:01 AM   7258436 190328828     96.33     42812    996060  97570740     49.38 183857000    572752         4
03:10:01 AM   7064440 190522824     96.42     43092   1003632  97719820     49.46 184116464    577776         8
03:20:01 AM   6810820 190776444     96.55     43092   1001068  97770376     49.48 184336860    575264        12
03:30:01 AM   6603232 190984032     96.66     43028   1003480  97941212     49.57 184526356    576568         4
03:40:01 AM   6445556 191141708     96.74     42996   1006948  97910928     49.55 184681476    579004         4
03:50:02 AM   6416556 191170708     96.75     42804    877836  97482648     49.34 184744996    593692        16
04:00:01 AM   6232892 191354372     96.85     42708    875880  97499352     49.34 184952620    591500       160
04:10:01 AM   6121388 191465876     96.90     42964    879900  97745304     49.47 185143336    594812         0
04:20:01 AM   6706372 190880892     96.61      4836    191412  97507252     49.35 185089680    131640        28
04:30:01 AM   6377360 191209904     96.77      8312    395140  97246688     49.22 185341104    221596         8
04:40:01 AM   6299512 191287752     96.81      8312    408000  97239388     49.21 185415596    229144       168
04:50:01 AM   8537324 189049940     95.68     10676    424948  95337092     48.25 182968292    241304         0
05:00:02 AM   8132648 189454616     95.88     10672    424168  95569824     48.37 183345512    240408         4
05:10:01 AM   7974252 189613012     95.96     30084    430368  95292196     48.23 183564256    259604       756
05:20:01 AM   7758556 189828708     96.07     30004    423580  95346912     48.26 183820624    253196       760
05:30:01 AM   7343460 190243804     96.28     30004    425704  95543240     48.35 184033864    254440       132
05:40:01 AM   7210348 190376916     96.35     30004    429712  95653512     48.41 184242452    256352        12
05:50:02 AM   7111008 190476256     96.40     29972    435124  95374764     48.27 184455076    261312         0
06:00:01 AM   6753644 190833620     96.58     29904    432796  95439872     48.30 184706252    258688        12
06:10:01 AM   6663980 190923284     96.63     29988    439508  95147496     48.15 184924228    245252         0
06:20:01 AM   6419080 191168184     96.75     29968    434364  95232840     48.20 185159940    240388         4
06:30:01 AM   6199224 191388040     96.86     29832    440428  95200208     48.18 185318324    244328         4
06:40:01 AM   7264780 190322484     96.32         0    206520  93275656     47.21 184546104    171564       124
06:50:01 AM  55748332 141838932     71.79      2384    215424  94860692     48.01 136025016    179660       160

06:50:01 AM kbmemfree kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
07:00:01 AM  55639736 141947528     71.84      2384    212156  94847212     48.00 136106124    175720       172
07:10:01 AM  55477772 142109492     71.92     29072    214260  94582800     47.87 136219556    200196       188
07:20:01 AM  54034112 143553152     72.65     29008    216656  95396752     48.28 136380448    201144       152
07:30:01 AM  54995248 142592016     72.17     28960    219244  94886944     48.02 136426624    203584        88
07:40:01 AM  55081528 142505736     72.12     28960    381500  94884908     48.02 136453644    361952       200
07:50:01 AM  55104288 142482976     72.11     28784    378812  94884932     48.02 136475436    359076       192
08:00:02 AM  55109436 142477828     72.11     28784    380844  94884628     48.02 136529760    360312       288
08:10:01 AM  54870768 142716496     72.23     28780    380148  94804568     47.98 136639008    335388       184
08:20:01 AM  54999376 142587888     72.16     28776    382060  94641564     47.90 136823544    336168       160
08:30:01 AM  54903184 142684080     72.21     28612    387728  94649848     47.90 136918812    339908       184
08:40:01 AM  54563340 143023924     72.39     28608    383816  94968228     48.06 137049928    336312       168
08:50:01 AM  54071444 143515820     72.63     28608    384352  94864728     48.01 137062280    336460       172
09:00:01 AM  52924192 144663072     73.21     28600    387876  95511624     48.34 137231668    337980       192
09:10:01 AM  54484516 143102748     72.43     29092    395304  94715400     47.94 137330196    345280       188
09:20:01 AM  54446328 143140936     72.44     28960    400212  94715024     47.94 137374336    348696       160
09:30:01 AM  54361116 143226148     72.49     28816    401576  94905324     48.03 137465808    349752       156
09:40:01 AM  54187700 143399564     72.58     28816    406892  94937756     48.05 137636060    353700       176
09:50:01 AM  54111172 143476092     72.61     28792    404408  94937660     48.05 137715004    350708       212
10:00:01 AM  54045612 143541652     72.65     28636    410700  94938416     48.05 137782192    355212       176
10:10:01 AM  53949492 143637772     72.70     28836    408496  94946504     48.05 137873456    352552       148
10:20:01 AM  53860072 143727192     72.74     28580    408904  94970204     48.06 137960976    352784       164
10:30:01 AM  53796140 143791124     72.77     28548    411840  94970520     48.07 138028056    354500       180
10:40:01 AM  53724480 143862784     72.81     28536    409440  94977132     48.07 138101608    351576       204
10:50:01 AM  53632208 143955056     72.86     28500    414820  94994768     48.08 138184264    355100       340
11:00:01 AM  53558640 144028624     72.89     28484    412272  95012300     48.09 138257464    352060       192
11:10:01 AM  53450056 144137208     72.95     29000    411924  95081460     48.12 138378372    352092       140
11:20:01 AM  53353684 144233580     73.00     28856    417760  95089924     48.13 138462776    355984       196
11:30:01 AM  53275284 144311980     73.04     28680    411144  95114180     48.14 138555004    349532       168
11:40:01 AM  53180856 144406408     73.08     28644    416928  95132444     48.15 138748064    241796       192
11:50:02 AM  53104720 144482544     73.12     28644    414872  95148888     48.16 138830952    240016       188
12:00:01 PM  53038364 144548900     73.16     28436    422188  95157284     48.16 138887716    244864       260
12:10:01 PM  52940960 144646304     73.21     28968    422612  95174276     48.17 138990432    245104       188
12:20:01 PM  52839704 144747560     73.26     28952    416772  95214816     48.19 139094124    239336       224
12:30:01 PM  52741608 144845656     73.31     28780    422560  95240468     48.20 139189332    243516       184
12:40:01 PM  52652076 144935188     73.35     28684    422440  95263852     48.21 139273696    242704       156
12:50:01 PM  52580036 145007228     73.39     28560    423224  95276864     48.22 139347252    242972       172
01:00:01 PM  52470796 145116468     73.44     28544    429764  95307508     48.24 139450480    246944        52
01:10:01 PM  52386888 145200376     73.49     28916    430736  95324964     48.24 139535056    247872       172
01:20:01 PM  52288428 145298836     73.54     28740    437240  95346152     48.26 139622800    252660       172
Average:     30905961 166681303     84.36     31022    571368  95891363     48.53 160607542    374746       136
  • top shows high memory usage for collectd:
    PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
  10959 root      20   0   94.8g  93.7g   1084 S   0.0 49.7   1408:39 collectd
  11693 ceph      20   0 5530948   4.2g   6336 S   2.9  2.2 438:26.95 ceph-osd
  11865 ceph      20   0 5223240   3.9g   6320 S   4.3  2.1 310:21.04 ceph-osd
  11439 ceph      20   0 5061992   3.7g   6336 S   5.8  2.0 428:49.36 ceph-osd
  11837 ceph      20   0 5040588   3.7g   6344 S   3.6  2.0 399:11.14 ceph-osd
  11393 ceph      20   0 5145832   3.7g   6356 S   2.9  2.0 485:14.45 ceph-osd
  • One of the ceph-osd processes was killed:
[root@overcloud-ceph-0 log]# ps -elf |grep ceph-os
4 S ceph       11187       1  3  80   0 - 1077076 futex_ Feb20 ?      04:04:53 /usr/bin/ceph-osd -f --cluster ceph --id 31 --setuser ceph --setgroup ceph
4 S ceph       11188       1  3  80   0 - 1043714 futex_ Feb20 ?      03:26:18 /usr/bin/ceph-osd -f --cluster ceph --id 34 --setuser ceph --setgroup ceph
4 S ceph       11189       1  3  80   0 - 1036474 futex_ Feb20 ?      03:24:50 /usr/bin/ceph-osd -f --cluster ceph --id 24 --setuser ceph --setgroup ceph
4 S ceph       11190       1  3  80   0 - 1131482 futex_ Feb20 ?      03:22:22 /usr/bin/ceph-osd -f --cluster ceph --id 30 --setuser ceph --setgroup ceph
4 S ceph       11194       1  2  80   0 - 1036458 futex_ Feb20 ?      02:16:49 /usr/bin/ceph-osd -f --cluster ceph --id 43 --setuser ceph --setgroup ceph
4 S ceph       11198       1  2  80   0 - 1029214 futex_ Feb20 ?      02:32:22 /usr/bin/ceph-osd -f --cluster ceph --id 28 --setuser ceph --setgroup ceph
4 S ceph       11199       1  2  80   0 - 945938 futex_ Feb20 ?       02:48:27 /usr/bin/ceph-osd -f --cluster ceph --id 39 --setuser ceph --setgroup ceph
4 S ceph       11203       1  3  80   0 - 979937 futex_ Feb20 ?       03:35:00 /usr/bin/ceph-osd -f --cluster ceph --id 41 --setuser ceph --setgroup ceph
4 S ceph       11205       1  3  80   0 - 1012111 futex_ Feb20 ?      03:10:16 /usr/bin/ceph-osd -f --cluster ceph --id 32 --setuser ceph --setgroup ceph
4 S ceph       11216       1  3  80   0 - 1082650 futex_ Feb20 ?      03:18:27 /usr/bin/ceph-osd -f --cluster ceph --id 38 --setuser ceph --setgroup ceph
4 S ceph       11222       1  6  80   0 - 1053951 futex_ Feb20 ?      06:22:22 /usr/bin/ceph-osd -f --cluster ceph --id 29 --setuser ceph --setgroup ceph
4 S ceph       11224       1  2  80   0 - 1000603 futex_ Feb20 ?      03:04:12 /usr/bin/ceph-osd -f --cluster ceph --id 26 --setuser ceph --setgroup ceph
4 S ceph       11228       1  2  80   0 - 980181 futex_ Feb20 ?       02:48:44 /usr/bin/ceph-osd -f --cluster ceph --id 44 --setuser ceph --setgroup ceph
4 S ceph       11235       1  2  80   0 - 1019063 futex_ Feb20 ?      03:07:07 /usr/bin/ceph-osd -f --cluster ceph --id 36 --setuser ceph --setgroup ceph
4 S ceph       11241       1  2  80   0 - 1032931 futex_ Feb20 ?      03:07:18 /usr/bin/ceph-osd -f --cluster ceph --id 42 --setuser ceph --setgroup ceph
4 S ceph       11245       1  4  80   0 - 1004392 futex_ Feb20 ?      04:10:48 /usr/bin/ceph-osd -f --cluster ceph --id 33 --setuser ceph --setgroup ceph
4 S ceph       11255       1  2  80   0 - 1023141 futex_ Feb20 ?      03:04:46 /usr/bin/ceph-osd -f --cluster ceph --id 25 --setuser ceph --setgroup ceph
4 S ceph       11258       1  2  80   0 - 1153967 futex_ Feb20 ?      02:38:46 /usr/bin/ceph-osd -f --cluster ceph --id 40 --setuser ceph --setgroup ceph
4 S ceph       11259       1  3  80   0 - 962888 futex_ Feb20 ?       03:37:45 /usr/bin/ceph-osd -f --cluster ceph --id 45 --setuser ceph --setgroup ceph
4 S ceph       11267       1  3  80   0 - 1093080 futex_ Feb20 ?      03:20:00 /usr/bin/ceph-osd -f --cluster ceph --id 27 --setuser ceph --setgroup ceph
4 S ceph       11269       1  4  80   0 - 962968 futex_ Feb20 ?       04:38:46 /usr/bin/ceph-osd -f --cluster ceph --id 47 --setuser ceph --setgroup ceph
4 S ceph       11274       1  3  80   0 - 978950 futex_ Feb20 ?       04:00:13 /usr/bin/ceph-osd -f --cluster ceph --id 37 --setuser ceph --setgroup ceph
4 S ceph       11277       1  3  80   0 - 1019468 futex_ Feb20 ?      03:20:33 /usr/bin/ceph-osd -f --cluster ceph --id 35 --setuser ceph --setgroup ceph
4 S ceph     2642023       1  3  80   0 - 464133 futex_ 06:41 ?       00:14:25 /usr/bin/ceph-osd -f --cluster ceph --id 46 --setuser ceph --setgroup ceph   <--------

with an error similar to this one:

Feb 24 06:41:01 overcloud-ceph-0 ceph-osd: in thread 7f3cf93ebd80 thread_name:ceph-osd
Feb 24 06:41:01 overcloud-ceph-0 ceph-osd: ceph version 12.2.12-48.el7cp (26388d73d88602005946d4381cc5796d42904858) luminous (stable)
Feb 24 06:41:01 overcloud-ceph-0 ceph-osd: 1: (()+0xa65921) [0x558eec5fe921]
Feb 24 06:41:01 overcloud-ceph-0 ceph-osd: 2: (()+0xf630) [0x7f3cf6aa7630]
Feb 24 06:41:01 overcloud-ceph-0 ceph-osd: 3: (std::__detail::_List_node_base::_M_hook(std::__detail::_List_node_base*)+0x4) [0x7f3cf63e5e54]
Feb 24 06:41:01 overcloud-ceph-0 ceph-osd: 4: (pg_log_t::pg_log_t(pg_log_t const&)+0x109) [0x558eec19df89]
Feb 24 06:41:01 overcloud-ceph-0 ceph-osd: 5: (PGLog::IndexedLog::operator=(PGLog::IndexedLog const&)+0x29) [0x558eec1a8c39]
Feb 24 06:41:01 overcloud-ceph-0 ceph-osd: 6: (void PGLog::read_log_and_missing<pg_missing_set<true> >(ObjectStore*, coll_t, coll_t, ghobject_t, pg_info_t const&, PGLog::IndexedLog&, pg_missing_set<true>&, bool, std::basic_ostringstream<char, std::char_traits<char>, std::allocator<char> >&, bool, bool*, DoutPrefixProvider const*, std::set<std::string, std::less<std::string>, std::allocator<std::string> >*, bool)+0x793) [0x558eec1a9ef3]
Feb 24 06:41:01 overcloud-ceph-0 ceph-osd: 7: (PG::read_state(ObjectStore*, ceph::buffer::list&)+0x52b) [0x558eec14bd3b]
Feb 24 06:41:01 overcloud-ceph-0 ceph-osd: 8: (OSD::load_pgs()+0x9b4) [0x558eec099e74]
Feb 24 06:41:01 overcloud-ceph-0 ceph-osd: 9: (OSD::init()+0x2169) [0x558eec0b89b9]
Feb 24 06:41:01 overcloud-ceph-0 ceph-osd: 10: (main()+0x2d07) [0x558eebfba007]
Feb 24 06:41:01 overcloud-ceph-0 ceph-osd: 11: (__libc_start_main()+0xf5) [0x7f3cf5ab3545]
Feb 24 06:41:01 overcloud-ceph-0 ceph-osd: 12: (()+0x4c0ed3) [0x558eec059ed3]
Feb 24 06:41:01 overcloud-ceph-0 ceph-osd: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Feb 24 06:41:02 overcloud-ceph-0 systemd: ceph-osd@46.service: main process exited, code=killed, status=11/SEGV
Feb 24 06:41:02 overcloud-ceph-0 systemd: Unit ceph-osd@46.service entered failed state.
Feb 24 06:41:02 overcloud-ceph-0 systemd: ceph-osd@46.service failed.
Feb 24 06:41:13 overcloud-ceph-0 collectd: dumped all
Feb 24 06:41:13 overcloud-ceph-0 kernel: reader#0[2042781]: segfault at 24 ip 00007efcad3e36ea sp 00007efc9a7fadd0 error 6 in libpython2.7.so.1.0[7efcad359000+17e000]
Feb 24 06:41:18 overcloud-ceph-0 collectd: dumped monmap epoch 1
Feb 24 06:41:23 overcloud-ceph-0 collectd: dumped all
Feb 24 06:41:23 overcloud-ceph-0 systemd: ceph-osd@46.service holdoff time over, scheduling restart.
Feb 24 06:41:23 overcloud-ceph-0 systemd: Cannot add dependency job for unit rpcbind.socket, ignoring: Unit not found.
Feb 24 06:41:23 overcloud-ceph-0 systemd: Stopped Ceph object storage daemon osd.46.
Feb 24 06:41:23 overcloud-ceph-0 systemd: Starting Ceph object storage daemon osd.46...
Feb 24 06:41:24 overcloud-ceph-0 systemd: Started Ceph object storage daemon osd.46.
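Before a node reaches this state, the leak is visible as steady growth of collectd's resident set size. A minimal watch loop such as the following can record that growth over time (a hedged sketch; the 60-second interval and the output fields are arbitrary choices):

# Sample collectd's memory footprint once a minute; ps reports RSS/VSZ in kB.
while true; do
    echo "$(date '+%F %T') $(ps -o pid=,rss=,vsz= -C collectd)"
    sleep 60
done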

Example: compute nodes affected by this issue

On compute nodes, this can affect active virtual machines:

var/log/containers/nova/nova-compute.log.8:2020-06-18 14:35:34.811 1 ERROR nova.compute.manager Traceback (most recent call last):
var/log/containers/nova/nova-compute.log.8:2020-06-18 14:35:34.811 1 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7426, in update_available_resource_for_node
var/log/containers/nova/nova-compute.log.8:2020-06-18 14:35:34.811 1 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
var/log/containers/nova/nova-compute.log.8:2020-06-18 14:35:34.811 1 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 673, in update_available_resource
var/log/containers/nova/nova-compute.log.8:2020-06-18 14:35:34.811 1 ERROR nova.compute.manager     resources = self.driver.get_available_resource(nodename)
var/log/containers/nova/nova-compute.log.8:2020-06-18 14:35:34.811 1 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6457, in get_available_resource
var/log/containers/nova/nova-compute.log.8:2020-06-18 14:35:34.811 1 ERROR nova.compute.manager     data["hypervisor_version"] = self._host.get_version()
var/log/containers/nova/nova-compute.log.8:2020-06-18 14:35:34.811 1 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/host.py", line 672, in get_version
var/log/containers/nova/nova-compute.log.8:2020-06-18 14:35:34.811 1 ERROR nova.compute.manager     return self.get_connection().getVersion()
var/log/containers/nova/nova-compute.log.8:2020-06-18 14:35:34.811 1 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 186, in doit
var/log/containers/nova/nova-compute.log.8:2020-06-18 14:35:34.811 1 ERROR nova.compute.manager     result = proxy_call(self._autowrap, f, *args, **kwargs)
var/log/containers/nova/nova-compute.log.8:2020-06-18 14:35:34.811 1 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 144, in proxy_call
var/log/containers/nova/nova-compute.log.8:2020-06-18 14:35:34.811 1 ERROR nova.compute.manager     rv = execute(f, *args, **kwargs)
var/log/containers/nova/nova-compute.log.8:2020-06-18 14:35:34.811 1 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 125, in execute
var/log/containers/nova/nova-compute.log.8:2020-06-18 14:35:34.811 1 ERROR nova.compute.manager     six.reraise(c, e, tb)
var/log/containers/nova/nova-compute.log.8:2020-06-18 14:35:34.811 1 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 83, in tworker
var/log/containers/nova/nova-compute.log.8:2020-06-18 14:35:34.811 1 ERROR nova.compute.manager     rv = meth(*args, **kwargs)
var/log/containers/nova/nova-compute.log.8:2020-06-18 14:35:34.811 1 ERROR nova.compute.manager   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3981, in getVersion
var/log/containers/nova/nova-compute.log.8:2020-06-18 14:35:34.811 1 ERROR nova.compute.manager     if ret == -1: raise libvirtError ('virConnectGetVersion() failed', conn=self)
var/log/containers/nova/nova-compute.log.8:2020-06-18 14:35:34.811 1 ERROR nova.compute.manager libvirtError: cannot fork child process: Cannot allocate memory
var/log/containers/nova/nova-compute.log.8:2020-06-18 14:35:34.811 1 ERROR nova.compute.manager

The OOM killer eventually reaps the collectd process, which can be seen in the kernel log with a very large RSS:

Jun 18 14:35:41 compute-0 kernel: Out of memory: Kill process 529856 (collectd) score 26 or sacrifice child
Jun 18 14:35:41 compute-0 kernel: Killed process 529856 (collectd), UID 0, total-vm:15272768kB, anon-rss:14423004kB, file-rss:808kB, shmem-rss:0kB
Jun 18 14:35:41 compute-0 kernel: docker-current invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Jun 18 14:35:41 compute-0 kernel: docker-current cpuset=/ mems_allowed=0-1

In the example above, collectd had a total_vm of 3818192 pages (15272768 kB / 4) and an RSS of 3605953 pages ((14423004 kB + 808 kB) / 4), since the kernel reports these counters in 4 kB pages; in kB: total-vm:15272768kB, anon-rss:14423004kB, file-rss:808kB, shmem-rss:0kB.

In other words, collectd consumed roughly 14423004 / 1024 / 1024 ≈ 13.75 GB of anonymous resident memory before the OOM killer terminated it.

Note: anon-rss is the anonymous (non-file-backed) portion of the process's resident set size; file-backed pages, such as mapped libraries, are counted under file-rss.
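The same conversion can be double-checked from the shell (a hedged one-off; the kB figures are copied from the OOM log above):

# Convert the OOM-killer figures (reported in kB) into 4 kB pages and GB.
awk 'BEGIN {
    total_vm = 15272768; anon_rss = 14423004; file_rss = 808  # kB, from the log
    printf "total_vm pages: %d\n", total_vm / 4               # 3818192
    printf "rss pages:      %d\n", (anon_rss + file_rss) / 4  # 3605953
    printf "anon-rss GB:    %.2f\n", anon_rss / 1024 / 1024   # 13.75
}'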

Environment

  • Red Hat OpenStack Platform 13.0 (RHOSP)
