[Beowulf] 10GbE topologies for small-ish clusters?

Gilad Shainer Shainer at Mellanox.com
Wed Oct 12 11:11:04 PDT 2011


The 48-port switches are not Mellanox products but came from a company that Mellanox acquired; the Mellanox ones are 36 x 40G or 64 x 10G in 1U (or bigger). But please don't let these small details keep you from reliving your history.

Good luck selling.

-----Original Message-----
From: beowulf-bounces at beowulf.org [mailto:beowulf-bounces at beowulf.org] On Behalf Of Greg Lindahl
Sent: Wednesday, October 12, 2011 11:05 AM
To: Chris Dagdigian
Cc: Beowulf Mailing List
Subject: Re: [Beowulf] 10GbE topologies for small-ish clusters?

We just bought a couple of 64-port 10g switches from Blade for the middle of our networking infrastructure. They won out over all the others on lowest price and appropriate features. We also bought Blade top-of-rack switches. Now that they've been bought up by IBM you have to negotiate harder to get that low price, but you can still get it by threatening them with competing quotes.

Gnodal looks very interesting for larger, multi-switch clusters; they were just a bit late to market for us. Arista really believes that their high prices are justified; we didn't.

And if anyone would like to buy some used Mellanox 48-port 10ge switches, we have 2 extras we'd like to sell.

-- greg

On Wed, Oct 12, 2011 at 10:52:13AM -0400, Chris Dagdigian wrote:
> 
> First time I'm seriously pondering bringing 10GbE straight to compute 
> nodes ...
> 
> For 64 servers (32 to a cabinet) in an HPC system that spans two
> racks, what would the common 10 Gig networking topology be today?
> 
> - One large core switch?
> - 48 port top-of-rack switches with trunking?
> - Something else?
> 
> Regards,
> Chris
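
As a rough back-of-the-envelope on the top-of-rack option above: with a 48-port ToR switch and 32 nodes per rack, 16 ports per switch are left for the inter-rack trunk, giving 2:1 oversubscription between racks. A minimal sketch of that arithmetic (the port counts are the ones mentioned in the thread; the trunking split is an assumption, not any vendor's design):

    # Port math for the "48-port top-of-rack switches with trunking"
    # option: 64 nodes, 32 per rack, two racks. Port counts come from
    # the thread; the rest is an illustrative assumption.

    nodes_per_rack = 32
    tor_ports = 48                       # one 48-port ToR switch per rack

    uplinks = tor_ports - nodes_per_rack           # 16 ports free for the trunk
    oversubscription = nodes_per_rack / uplinks    # 32:16 -> 2.0

    print(f"uplink ports per ToR switch: {uplinks}")            # 16
    print(f"inter-rack oversubscription: {oversubscription}:1")  # 2.0:1

For comparison, the single-core-switch option connects all 64 nodes on one 64-port switch at full bisection with no trunk at all, at the cost of running every cable from the second rack over to wherever the core switch lives.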


