>----- Original Message -----
>From: "Greg Lindahl" <lindahl@pbm.com>
>
>On Mon, May 11, 2009 at 02:30:31PM -0400, Mark Hahn wrote:
>
>> 80 is fairly high, and generally requires a high-bw, low-lat net.
>> gigabit, for instance, is normally noticably lower, often not much
>> better than 50%. but yes, top500 linpack is basically just
>> interconnect factor * peak, and so unlike real programs...
>
>Don't forget that it depends significantly on memory size.

... and interconnect. Take a look at the top500 and note that
GigE interconnects tend to deliver a lower percentage of peak
when running Linpack.

As suggested, to model a Linpack number for your cluster quickly,
compute peak performance, then go to the top500 list and find a
system with your processors and interconnect type. Note the
percentage of peak Linpack reported for that system and use it to
generate an estimated Linpack number for your cluster.

Later, when you have time to install and tune Linpack for your
machine, you can see how close your estimate was. It should
not be more than 2 to 4% off.

Regards,

rbw
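
P.S. A quick back-of-the-envelope sketch of that arithmetic, with
made-up numbers -- the node count, clock speed, flops/cycle, and the
0.55 efficiency figure below are only placeholders; substitute your
own hardware and the percentage of peak you pull from the comparable
top500 entry:

    # Rough Linpack estimate (hypothetical cluster, adjust to taste).
    nodes            = 32      # assumed number of compute nodes
    sockets_per_node = 2
    cores_per_socket = 4
    ghz              = 2.5     # core clock in GHz
    flops_per_cycle  = 4       # per core, depends on the CPU

    # Theoretical peak (Rpeak) in GFLOPS
    rpeak = nodes * sockets_per_node * cores_per_socket * ghz * flops_per_cycle

    # Efficiency (Rmax / Rpeak) read off a top500 system with similar
    # processors and interconnect -- GigE systems tend to sit noticeably
    # lower than systems with a low-latency fabric.
    efficiency = 0.55

    rmax_estimate = rpeak * efficiency
    print("Rpeak:          %.1f GFLOPS" % rpeak)
    print("Estimated Rmax: %.1f GFLOPS" % rmax_estimate)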