<br><div class="gmail_quote">On Tue, Feb 17, 2009 at 8:14 PM, Mike Davis <span dir="ltr"><<a href="mailto:jmdavis1@vcu.edu">jmdavis1@vcu.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
<div class="Ih2E3d"><br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
On Mon, 16 Feb 2009, Tiago Marques wrote:<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
I must ask, doesn't anybody on this list successfully run something like 16 cores across two nodes, for a code and job that takes about a week to complete?<br>
</blockquote></blockquote></div>
For GROMACS, do a Google search on GROMACS parallel scaling. Switch setup and utilization play a major role in how the code scales.</blockquote><div></div><div>Thanks, I'll look into that.</div><div>Best regards,</div>
<div> Tiago Marques</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;"><br>
<br>
<br>
Mike<br><font color="#888888">
<br>
-- <br>
Mike Davis Technical Director<br>
(804) 828-3885 Center for High Performance Computing<br>
<a href="mailto:jmdavis1@vcu.edu" target="_blank">jmdavis1@vcu.edu</a> Virginia Commonwealth University<br>
<br>
"Never tell people how to do things. Tell them what to do and they will surprise you with their ingenuity." George S. Patton<br>
<br>
</font></blockquote></div><br>