[Beowulf] confused about high values of "used" memory under "top" even without running jobs

Rahul Nabar rpnabar at gmail.com
Wed Aug 12 11:55:45 PDT 2009


I am a bit confused about the high "used" memory that top is showing on one
of my machines. Is this "leaky" memory from codes that did not return all
their memory when they exited? Can I identify who is hogging the memory?
Are there other ways to "release" this memory?

I can see no user processes really (even the load average is close to
zero), yet 7 GB out of our total of 16 GB seems to be used.

################################################
top - 13:45:00 up 4 days, 20:07,  2 users,  load average: 0.00, 0.00, 0.00
Tasks: 146 total,   1 running, 145 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  16508824k total,  7148804k used,  9360020k free,   307040k buffers
Swap:  8385920k total,        0k used,  8385920k free,  6380236k cached
################################################
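
If I am reading that output correctly, subtracting buffers and cached from
"used" leaves only about 450 MB actually held by processes, so most of the
7 GB would just be the filesystem cache:

################################################
# all values in kB, taken from the top output above
# used - buffers - cached
# 7148804 - 307040 - 6380236 = 461528 kB  (roughly 450 MB)

free -m    # the "-/+ buffers/cache" line should report the same figure
################################################

Is that the right way to account for it?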

On the other hand, I recall reading somewhere that, because of the page
cache, Linux is supposed to use as much memory as you give it for caching
file data. I'm just not sure whether this is something I need to worry
about or not.
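
One thing I was considering (as root) is explicitly asking the kernel to
drop the clean caches, though I don't know whether that is advisable on a
production node or whether the kernel reclaims this memory on demand anyway:

################################################
# flush dirty pages to disk, then drop clean pagecache,
# dentries and inodes (needs root, kernel >= 2.6.16)
sync
echo 3 > /proc/sys/vm/drop_caches
################################################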

Incidentally, I discovered this because users reported that their codes
were running ~30% faster right after a machine reboot than after the node
had been up for a few days. Do people do anything special in a
scheduler-based environment (say PBS) to make sure the last job releases
all its memory resources before the new one starts running?
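
For instance, I was wondering whether a per-node epilogue script along
these lines would help (a hypothetical sketch; the path and hook name are
whatever your PBS/TORQUE install uses, e.g. $PBS_HOME/mom_priv/epilogue),
or whether it is counter-productive because the next job would have
benefited from the warm cache:

################################################
#!/bin/sh
# hypothetical scheduler epilogue, run on the node after each job ends
sync                                # write out any dirty pages
echo 3 > /proc/sys/vm/drop_caches   # drop pagecache, dentries and inodes
exit 0
################################################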

[Apologies for the multi-posting; I first posted this on a generic Linux
list but then thought that HPC folks might be more sensitive to such
memory issues.]
-- 
Rahul


