<div dir="ltr">Only for benchmarking? We have done this for years on our production clusters (and SGI provides a tool this and more to clean up nodes). We have this in our epilogue so that we can clean out memory on our diskless nodes so there is nothing stale sitting around that can impact the next users job. <div>
Craig

On Thu, Apr 18, 2013 at 2:36 PM, Max R. Dechantsreiter <max@performancejones.com> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="im"><br>
<br>
<br>
On Thu, 18 Apr 2013, Mark Hahn wrote:

>> What problems?
>
> performance, of course. drop_caches is really only sane for benchmarking,
> where you want to control for hot/cold caches.

Indeed.

I thought you might know of harmful instances of which I was unaware.
<div class="HOEnZb"><div class="h5"><br>
> otherwise, you're almost certainly better off either letting the kernel<br>
> optimize global caching, and/or fix your application<br>
> to avoid polluting the cache (O_DIRECT, madvise, etc).<br>
><br>
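
For concreteness, the application-side fix usually looks something like the sketch below; the file name, buffer sizes, and the choice of posix_fadvise as the file-I/O analogue of madvise are mine, not Mark's:

/* nopollute.c -- sketch of reading a file without leaving it in the
 * pagecache.  Variant 1 uses O_DIRECT, which bypasses the cache but
 * requires aligned buffers; variant 2 does a normal read and then
 * asks the kernel to drop the pages it cached for this file only. */
#define _GNU_SOURCE                /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s file\n", argv[0]);
        return 1;
    }

    /* Variant 1: O_DIRECT -- I/O bypasses the pagecache entirely. */
    void *buf;
    if (posix_memalign(&buf, 4096, 4096) != 0)
        return 1;
    int fd = open(argv[1], O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open O_DIRECT"); return 1; }
    while (read(fd, buf, 4096) > 0)
        ;                          /* process the data here */
    close(fd);
    free(buf);

    /* Variant 2: normal read, then drop what we cached -- affects
     * only this file, not the whole machine. */
    fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }
    char small[4096];
    while (read(fd, small, sizeof small) > 0)
        ;
    posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
    close(fd);
    return 0;
}

O_DIRECT forfeits readahead and imposes alignment rules, so the fadvise variant is often the gentler place to start.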
> chip vendors could provide a drop_caches for CPUs, too, and it would also be
> "non-destructive". afaik, such instructions do exist, and are always
> privileged, for basically the same DoS-based reason.
>
> regards, mark hahn.
>
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf