<div dir="ltr"><br><div>Wow - yeah David this sure is a doozie!<br><br>Super long shot...<br><br><a href="http://blog.jcuff.net/2015/04/of-huge-pages-and-huge-performance-hits.html">http://blog.jcuff.net/2015/04/of-huge-pages-and-huge-performance-hits.html</a><br></div><div><br></div><div>Best,</div><div><br></div><div>j.</div></div><div class="gmail_extra"><br clear="all"><div><div class="gmail_signature"><div dir="ltr"><div><br></div>--<br>dr. james cuff, assistant dean for research computing, harvard university | division of science | thirty eight oxford street, cambridge. ma. 02138 | +1 617 384 7647 | <a href="http://rc.fas.harvard.edu" target="_blank">http://rc.fas.harvard.edu</a></div></div></div>
<br><div class="gmail_quote">On Thu, Jul 9, 2015 at 2:44 PM, mathog <span dir="ltr"><<a href="mailto:mathog@caltech.edu" target="_blank">mathog@caltech.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Reran the generators and that did make the system slow again, so at least this problem can be reproduced.<br>
<br>
After those ran, memory is definitely in short supply; pretty much all of it is sitting in file cache. For whatever reason, the system seems loath to release memory from file cache for other uses. I think that is the problem.<br>
<br>
Here is some data; this is a bit long...<br>
<br>
numactl --hardware<br>
available: 2 nodes (0-1)<br>
node 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32 34 36 38 40 42 44 46<br>
node 0 size: 262098 MB<br>
node 0 free: 18372 MB<br>
node 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47<br>
node 1 size: 262144 MB<br>
node 1 free: 2829 MB<br>
node distances:<br>
node 0 1<br>
0: 10 20<br>
1: 20 10<br>
<br>
CPU-specific tests were done on CPU 20, which is in NUMA node 0. None of the tests come close to using up all of the physical memory in a "node", which is 262 GB.<br>
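(In case it matters for reproducing this: a run can be pinned to that CPU, with its allocations forced onto node 0, with something like the lines below. "./testprog" is just a placeholder for the test binary, not its real name.)<br>
<br>
# pin to CPU 20 and restrict memory allocation to NUMA node 0<br>
numactl --physcpubind=20 --membind=0 ./testprog<br>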
<br>
When the cache has been cleared and the test programs run fast:<br>
cat /proc/meminfo | head -11<br>
MemTotal: 529231456 kB<br>
MemFree: 525988868 kB<br>
Buffers: 5428 kB<br>
Cached: 46544 kB<br>
SwapCached: 556 kB<br>
Active: 62220 kB<br>
Inactive: 121316 kB<br>
Active(anon): 26596 kB<br>
Inactive(anon): 109456 kB<br>
Active(file): 35624 kB<br>
Inactive(file): 11860 kB<br>
<br>
Run one test and the file cache jumps up to:<br>
<br>
MemTotal: 529231456 kB<br>
MemFree: 491812500 kB<br>
Buffers: 10644 kB<br>
Cached: 34139976 kB<br>
SwapCached: 556 kB<br>
Active: 34152592 kB<br>
Inactive: 130400 kB<br>
Active(anon): 27560 kB<br>
Inactive(anon): 109316 kB<br>
Active(file): 34125032 kB<br>
Inactive(file): 21084 kB<br>
<br>
and the next test is still quick. After the generators have run, but when nothing much else is running, it starts like this:<br>
<br>
cat /proc/meminfo | head -11<br>
MemTotal: 529231456 kB<br>
MemFree: 19606616 kB<br>
Buffers: 46704 kB<br>
Cached: 493107268 kB<br>
SwapCached: 556 kB<br>
Active: 34229020 kB<br>
Inactive: 459056372 kB<br>
Active(anon): 712 kB<br>
Inactive(anon): 135508 kB<br>
Active(file): 34228308 kB<br>
Inactive(file): 458920864 kB<br>
<br>
Then when a test job is run, the numbers drop quickly to this and stick; note the MemFree value. I think this is where the "events/20" process kicks in:<br>
<br>
cat /proc/meminfo | head -11<br>
MemTotal: 529231456 kB<br>
MemFree: 691740 kB<br>
Buffers: 46768 kB<br>
Cached: 493056968 kB<br>
SwapCached: 556 kB<br>
Active: 53164328 kB<br>
Inactive: 459006232 kB<br>
Active(anon): 18936048 kB<br>
Inactive(anon): 135608 kB<br>
Active(file): 34228280 kB<br>
Inactive(file): 458870624 kB<br>
<br>
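(The snapshots above were taken by hand with cat/head; a rough way to watch the interesting fields continuously while a test runs would be something like:)<br>
<br>
# refresh the relevant /proc/meminfo fields every 2 seconds<br>
watch -n 2 'grep -E "MemFree|^Cached|Active|Inactive" /proc/meminfo'<br>
<br>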
Kill the process and the system "recovers" to the preceding memory configuration within a few seconds. Similarly, here are the /proc/zoneinfo values from before the generators were run, when the system was fast:<br>
<br>
extract -in state_zoneinfo_fast3.txt -if '^Node' -ifn 10 -ifonly<br>
Node 0, zone DMA<br>
pages free 3931<br>
min 0<br>
low 0<br>
high 0<br>
scanned 0<br>
spanned 4095<br>
present 3832<br>
nr_free_pages 3931<br>
nr_inactive_anon 0<br>
nr_active_anon 0<br>
Node 0, zone DMA32<br>
pages free 105973<br>
min 139<br>
low 173<br>
high 208<br>
scanned 0<br>
spanned 1044480<br>
present 822056<br>
nr_free_pages 105973<br>
nr_inactive_anon 0<br>
nr_active_anon 0<br>
Node 0, zone Normal<br>
pages free 50199731<br>
min 11122<br>
low 13902<br>
high 16683<br>
scanned 0<br>
spanned 66256896<br>
present 65351040<br>
nr_free_pages 50199731<br>
nr_inactive_anon 16490<br>
nr_active_anon 7191<br>
Node 1, zone Normal<br>
pages free 57596396<br>
min 11265<br>
low 14081<br>
high 16897<br>
scanned 0<br>
spanned 67108864<br>
present 66191360<br>
nr_free_pages 57596396<br>
nr_inactive_anon 10839<br>
nr_active_anon 1772<br>
<br>
and after the generators were run (slow):<br>
<br>
Node 0, zone DMA<br>
pages free 3931<br>
min 0<br>
low 0<br>
high 0<br>
scanned 0<br>
spanned 4095<br>
present 3832<br>
nr_free_pages 3931<br>
nr_inactive_anon 0<br>
nr_active_anon 0<br>
Node 0, zone DMA32<br>
pages free 105973<br>
min 139<br>
low 173<br>
high 208<br>
scanned 0<br>
spanned 1044480<br>
present 822056<br>
nr_free_pages 105973<br>
nr_inactive_anon 0<br>
nr_active_anon 0<br>
Node 0, zone Normal<br>
pages free 23045<br>
min 11122<br>
low 13902<br>
high 16683<br>
scanned 0<br>
spanned 66256896<br>
present 65351040<br>
nr_free_pages 23045<br>
nr_inactive_anon 16486<br>
nr_active_anon 5839<br>
Node 1, zone Normal<br>
pages free 33726<br>
min 11265<br>
low 14081<br>
high 16897<br>
scanned 0<br>
spanned 67108864<br>
present 66191360<br>
nr_free_pages 33726<br>
nr_inactive_anon 10836<br>
nr_active_anon 1065<br>
<br>
Looking the same way at /proc/zoneinfo while a test is running showed the "pages free" and "nr_free_pages" values oscillating downward to a low of about 28000 for Node 0, zone Normal. The rest of the values were essentially stable.<br>
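<br>
(For anyone without the "extract" program, roughly the same per-zone slices can be pulled from /proc/zoneinfo with awk, e.g.:)<br>
<br>
# print each "Node" header line plus the following 10 lines<br>
awk '/^Node/ {n=11} n-- > 0' /proc/zoneinfo<br>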
<br>
Looking the same way at /proc/meminfo while a test is running gave values that differed only in minor ways from the "after" table shown above. MemFree varied in a range from about 680000 to 720000 kB. Cached dropped to ~482407184 kB and then barely budged at all.<br>
<br>
Finally, the last few lines from "sar -B" (sorry about the wrap):<br>
<br>
pgpgin/s pgpgout/s fault/s majflt/s pgfree/s pgscank/s pgscand/s pgsteal/s %vmeff<br>
10:30:03 AM 5810.55 301475.26 95.99 0.05 51710.29 48086.79 0.00 48084.94 100.00<br>
10:40:01 AM 3404.90 185502.87 96.67 0.01 47267.84 44816.30 0.00 44816.30 100.00<br>
10:50:02 AM 9.13 13.32 192.24 0.11 4592.56 48.54 3149.01 3197.55 100.00<br>
11:00:01 AM 191.78 9.97 347.56 0.13 16760.51 0.00 3683.21 3683.21 100.00<br>
11:10:01 AM 11.64 7.75 342.59 0.09 18528.24 0.00 1699.66 1699.66 100.00<br>
11:20:01 AM 0.00 6.75 96.87 0.00 43.97 0.00 0.00 0.00 0.00<br>
<br>
The generators finished at 10:35. At the 10:30 data point (while they were still running), pgscank/s and pgsteal/s jumped from 0 to very high values. When later tests were run, the former fell back to almost nothing but the latter stayed high. In addition, the test runs made after the generators pushed pgscand/s from 0 to several thousand per second. The last row covers a 10 minute span in which no tests were run, and all of these values dropped back to zero.<br>
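<br>
(Those rows are from the regular 10-minute sysstat collection; the same counters can also be sampled live at a finer grain while a test is running, e.g.:)<br>
<br>
# report paging activity every 2 seconds, 30 samples<br>
sar -B 2 30<br>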
<br>
Since excessive file cache seems to be implicated, I did this:<br>
echo 3 > /proc/sys/vm/drop_caches<br>
<br>
and reran the test on CPU 20. It was fast.<br>
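<br>
(One caveat there: drop_caches only discards clean pages, so it is probably worth syncing first so that dirty pages get written back and become droppable:)<br>
<br>
sync<br>
echo 3 > /proc/sys/vm/drop_caches<br>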
<br>
I guess the question now is what parameter(s) control(s) the release of memory from file cache for other uses when free memory is in short supply and there is substantial demand for it. It seems the OS isn't releasing the cache. Or maybe it isn't flushing it to disk. I don't think it is the latter, because iotop and iostat don't show any disk activity during a "slow" read.<br>
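<br>
(I have not changed anything yet, but as a guess the knobs worth inspecting first would be the VM reclaim sysctls and the transparent hugepage settings; current values can be dumped like this. The sysfs path may be redhat_transparent_hugepage on older RHEL/CentOS kernels.)<br>
<br>
# reclaim-related sysctls, current values only<br>
sysctl vm.min_free_kbytes vm.zone_reclaim_mode vm.swappiness vm.vfs_cache_pressure<br>
# transparent hugepage settings<br>
cat /sys/kernel/mm/transparent_hugepage/enabled<br>
cat /sys/kernel/mm/transparent_hugepage/defrag<br>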
<br>
Thanks,<div class="HOEnZb"><div class="h5"><br>
<br>
David Mathog<br>
<a href="mailto:mathog@caltech.edu" target="_blank">mathog@caltech.edu</a><br>
Manager, Sequence Analysis Facility, Biology Division, Caltech<br>
_______________________________________________<br>
Beowulf mailing list, <a href="mailto:Beowulf@beowulf.org" target="_blank">Beowulf@beowulf.org</a> sponsored by Penguin Computing<br>
To change your subscription (digest mode or unsubscribe) visit <a href="http://www.beowulf.org/mailman/listinfo/beowulf" rel="noreferrer" target="_blank">http://www.beowulf.org/mailman/listinfo/beowulf</a><br>
</div></div></blockquote></div><br></div>