http://www.penguincomputing.com/cluster_computing

Can the above be of any help to you?

Regards
Prajeev

On Fri, Mar 27, 2009 at 11:16 AM, Dow Hurst DPHURST <DPHURST@uncg.edu> wrote:

To: beowulf@beowulf.org
From: Greg Lindahl <lindahl@pbm.com>
Sent by: beowulf-bounces@beowulf.org
Date: 03/27/2009 12:03AM
Subject: Re: [Beowulf] Lowered latency with multi-rail IB?

On Thu, Mar 26, 2009 at 11:32:23PM -0400, Dow Hurst DPHURST wrote:

> We've got a couple of weeks max to finalize spec'ing a new cluster. Has
> anyone knowledge of lowering latency for NAMD by implementing a
> multi-rail IB solution using MVAPICH or Intel's MPI?

Multi-rail is likely to increase latency.

BTW, Intel MPI usually has higher latency than other MPI implementations.

If you look around for benchmarks you'll find that QLogic InfiniPath does quite well on NAMD and friends, compared to that other brand of InfiniBand adaptor. For example, at

http://www.ks.uiuc.edu/Research/namd/performance.html

the lowest line (== best performance) is InfiniPath. Those results aren't the most recent, but I'd bet that the current generation of adaptors has the same situation.

-- Greg
(yeah, I used to work for QLogic.)

I'm very familiar with that benchmark page. ;-)

One motivation for designing an MPI layer that lowers latency over multi-rail is the use of accelerator cards or GPUs: so much more work gets done per node that the interconnect quickly becomes the limiting factor. One Tesla GPU is equivalent to 12 cores for the current implementation of NAMD/CUDA, so the scaling efficiency really suffers. I'd like to see how someone could scale efficiently beyond 16 IB connections with only two GPUs per IB connection when running NAMD/CUDA.
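
To put rough numbers on that, here is a quick back-of-envelope sketch in Python. The 12-cores-per-GPU figure is the NAMD/CUDA one above; the 8-core host node is just my assumption for comparison, so treat the exact ratio as illustrative:

# Rough compute "demand" per IB rail, in CPU-core equivalents.
# Assumptions: 12 core-equivalents per Tesla GPU (the NAMD/CUDA figure
# above), 2 GPUs sharing one IB connection, and an 8-core CPU-only node
# with its own IB connection for comparison (the 8 is my guess).
CORES_PER_GPU_EQUIV = 12
GPUS_PER_IB_RAIL = 2
CPU_CORES_PER_NODE = 8

gpu_demand_per_rail = CORES_PER_GPU_EQUIV * GPUS_PER_IB_RAIL  # 24
cpu_demand_per_rail = CPU_CORES_PER_NODE                      # 8

print("core-equivalents per rail: GPU node %d, CPU node %d, ratio %dx" %
      (gpu_demand_per_rail, cpu_demand_per_rail,
       gpu_demand_per_rail // cpu_demand_per_rail))
# At 16 IB connections that is roughly 384 core-equivalents of work
# feeding 16 rails, which is why the scaling efficiency falls off.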

Some codes are sped up far beyond 12x; VMD's cionize utility, for example, reaches 100x, though I don't think that particular code requires parallelization (I'm not sure). However, as NAMD/CUDA is tuned, GPU efficiency increases and new bottlenecks are found and fixed in previously ignored sections of code, so the speedup will grow well beyond 12x. A solution to the interconnect bottleneck therefore needs to be developed, and I wondered whether multi-rail would be the answer. Thanks so much for your thoughts!
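
Here is one way I think about why the bottleneck gets worse, not better, as the GPU code is tuned: per-step communication time stays roughly fixed while per-step compute shrinks by the speedup factor. A toy Python model follows; the 10:1 compute-to-communication split is purely my assumption, not a NAMD measurement:

# Toy model: per-timestep parallel efficiency when compute speeds up
# but communication cost stays fixed. All numbers are illustrative.
def efficiency(compute_speedup, t_compute=1.0, t_comm=0.1):
    """Fraction of each timestep spent on useful compute."""
    t_c = t_compute / compute_speedup
    return t_c / (t_c + t_comm)

for s in (1, 12, 50, 100):
    print("speedup %3dx -> efficiency %2.0f%%" % (s, 100 * efficiency(s)))
# Prints roughly: 1x -> 91%, 12x -> 45%, 50x -> 17%, 100x -> 9%.
# The faster the compute side gets, the larger the fraction of each step
# the interconnect accounts for, hence the interest in attacking latency.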

Best wishes,
Dow

_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf