space requirements

Judd Tracy jtracy at ist.ucf.edu
Fri Mar 28 19:13:30 PST 2003


----- Original Message -----
From: "Robert G. Brown" <rgb at phy.duke.edu>
To: <jbbernard at eng.uab.edu>
Cc: <Beowulf at beowulf.org>
Sent: Friday, March 28, 2003 3:07 PM
Subject: Re: space requirements


> For 100BT ports alone for 1000 nodes you'll likely need a full 45U rack,
> twice that if you use patch panels, guesstimating something like 24 ports
> per U (although there may well be switches with higher per-U density).
> OTOH, if you have a lot of higher speed ports, they may have lower port
> density.  If you have both 100BT and myrinet switches, you'll have to
> accommodate both.
>
> You also have to worry a LOT about how you're going to partition the
> switches -- I don't think there are any switches with 1000 ports with
> full bisection bandwidth at any price, and the bigger the full-BB
> switches that you DO select, with uplinks of one sort or another to
> connect the switches, the more expensive.  Unless, of course, you choose
> another network (Myrinet, SCI) as the IPC channel or only will be doing
> embarrassingly parallel tasks and don't care about the 100BT network.
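The rack estimate quoted above is straightforward arithmetic; here's a minimal sketch of it (the 24 ports/U figure and the doubling for patch panels are taken directly from the message, the function name is mine):

```python
import math

def rack_units(ports, ports_per_u=24, patch_panels=False):
    """Rack units needed to house 'ports' switch ports.

    Patch panels mirror every switch port on a panel port,
    roughly doubling the space required.
    """
    u = math.ceil(ports / ports_per_u)
    return 2 * u if patch_panels else u

print(rack_units(1000))                     # 42 -> fills most of a 45U rack
print(rack_units(1000, patch_panels=True))  # 84 -> roughly two full racks
```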

Just to let you know, Extreme Networks has their BlackDiamond 6818, which will
support up to 1440 100BT ports with a 640 Gbps non-blocking backplane, all in
35U (I think 61.25 in).  I am sure that Foundry has something similar too.

Judd Tracy
Assistant in Simulation
Institute for Simulation and Training
University of Central Florida
jtracy at ist.ucf.edu




