Fwd: [Beowulf] QDR InfiniBand interconnect architectures ... approaches ...

richard.walsh at comcast.net
Thu Apr 8 14:30:31 PDT 2010





----- Forwarded Message ----- 
From: "richard walsh" <richard.walsh at comcast.net> 
To: "Craig Tierney" <Craig.Tierney at noaa.gov> 
Sent: Thursday, April 8, 2010 5:19:14 PM GMT -05:00 US/Canada Eastern 
Subject: Re: [Beowulf] QDR InfiniBand interconnect architectures ... approaches ... 



On Thursday, April 8, 2010 2:42:49 PM Craig Tierney wrote: 


>We have been telling our vendors to design a multi-level tree using 
>36 port switches that provides approximately 70% bisection bandwidth. 
>On a 448 node Nehalem cluster, this has worked well (weather, hurricane, and 
>some climate modeling). This design (15 up/21 down) allows us to 
>scale the system to 714 nodes. 
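(Checking the quoted numbers before replying — my own arithmetic, assuming a standard two-tier tree built from 36-port switches with the 15 up / 21 down split:)

```python
import math

# The 15 up / 21 down split on 36-port switches, as quoted above.
UP, DOWN, RADIX = 15, 21, 36

# Bisection bandwidth of the two-tier tree is the up/down ratio per leaf:
print(round(UP / DOWN, 3))                # 0.714 -> the "approximately 70%"

# The 714-node ceiling is 34 fully populated leaf switches:
leaves = 714 // DOWN                      # 34 leaf switches
spines = math.ceil(leaves * UP / RADIX)   # 15 spine switches (510 of 540 ports)
print(leaves, spines)
```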


Hey Craig, 


Thanks for the information. So are you driven mostly by the need 
for incremental expandability with this design, or do you disagree 
with Greg and think the cost is as good as or better than a chassis-based 
approach? What about reliability (assuming the vendor is 
putting it together for you) and maintenance headaches? Not so 
bad? What kind of cabling are you using? 


Trying to do the math on the design ... for the 448 nodes you would 
need 22 switches for the first tier (22 * 21 = 462 down ports). That gives 
you 15 * 22 = 330 uplinks, so you need at least 10 switches in the 
second tier (10 * 36 = 360 ports), which leaves you some spare ports for 
other things. Am I getting this right? Could you lay out the design 
in a bit more detail? Did you consider building things from medium-size 
switches (say 108-port models)? Are you paying a premium 
for incremental expandability or not? How many ports are you using 
for your file server? 
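(The arithmetic above in a few lines, a sketch assuming you round up at each tier:)

```python
import math

NODES = 448
UP, DOWN, RADIX = 15, 21, 36   # 15 up / 21 down on 36-port switches

leaf = math.ceil(NODES / DOWN)            # 22 leaf switches, 22 * 21 = 462 down ports
uplinks = leaf * UP                       # 330 uplinks into the second tier
spine = math.ceil(uplinks / RADIX)        # 10 spine switches, 10 * 36 = 360 ports
spare = spine * RADIX - uplinks           # 30 spare spine ports for other things
print(leaf, uplinks, spine, spare)        # 22 330 10 30
```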


Our system is likely to come in at 192 nodes with some additional 
ports for file server connections. I would like to compare the cost 
of a 216-port switch to your 15/21 design using 36-port switches. 
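(A rough port count for a 192-node version of the same 15/21 design — my own sketch; I've left prices out since I don't have quotes yet:)

```python
import math

NODES = 192                 # our likely node count; file servers go on spare ports
UP, DOWN, RADIX = 15, 21, 36

leaf = math.ceil(NODES / DOWN)            # 10 leaf switches (210 down ports)
spine = math.ceil(leaf * UP / RADIX)      # 5 spine switches (150 of 180 ports)
edge_spare = leaf * DOWN - NODES          # 18 spare edge ports for file servers
print(leaf, spine, edge_spare)            # 10 5 18
```

So 15 of the 36-port boxes against one 216-port chassis, before cabling and maintenance are factored in.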


Thanks much, 


rbw 





> _______________________________________________ 
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing 
> To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf 


