[dsp-clusters] Question
Eugene Leitl
eugene.leitl at lrz.uni-muenchen.de
Wed Aug 16 23:52:13 PDT 2000
Cody Lutsch writes:
> A mix-and-match of PCs is just out of the question for large Beowulf projects,
> period. I can imagine the nightmare of managing a bunch of PCs lying on the
> floor. When you move to a racked environment, all hubs, switches, KVMs,
Lying on the floor? I thought it was clear from the URLs I provided
that one shelves off-the-shelf (sorry) commodity PCs (whether
platinum-plated or not depends on how your codes scale with respect to
parallelism) on cheap heavy-duty shelves, as available from Costco et
al. The Stone Soupercomputer is clearly not a standard way of doing
things; machines in a cluster are typically homogeneous.
And I don't get the "nightmare" bit. If you don't need physical access
to the machines, managing them is exactly the same. If you do, plucking
a desktop off the shelf and putting a spare in its place is, if
anything, easier than unscrewing the slide rails from a 19"
rackmount. If you want to transport a 19" rackmounted cluster, you
have to disassemble it anyway, as it is much too heavy to tip over
safely. The only advantage is footprint, since you can pack more than
64 nodes into a single full-height 19" rack.
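For reference, a quick density sketch (42U is an assumed typical
full-height rack; the chassis figures are illustrative, and the
"3-in-1" 2U option comes up again further down in the thread):

  # rough rack density arithmetic (Python, assumed figures)
  rack_units = 42
  print("1U servers:  %d nodes/rack" % (rack_units // 1))        # 42
  print("3-in-1 2U:   %d nodes/rack" % ((rack_units // 2) * 3))  # 63
  # a taller (e.g. 47U) rack or a denser chassis pushes this past 64
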
> APCs, routers, servers, you name it, are available in a 19" flavor.... all
> fitting in the same enclosure.
While I don't have anything against rackmount switches (since they
don't come at an extra cost), I'd probably go with a distributed UPS
solution (one $100 UPS for each node), or use a diesel generator with
a crossover circuit instead of a UPS for larger installations. Also,
minimizing the length of cabling might make you want to put the
switches where the nodes are.
> > Well, if it was my grant money, I'd build the thing with my students,
> > because that way I'd have 3-4 times as many nodes for the same
> > money.
>
> It nowhere near triples or quadruples the cost per node. If you are using
> high-end systems, the percent added due to Rackmount hardware becomes
> increasingly small. Let's use $200 as the cost of a 19" Rackmount case (some
> are a lot more, some are a bit less). Let's also say a typical desktop-type
> case is free (they are not free, but it makes the math easier). Now let's
If you order a set of identical machines from a Taiwan company, or
compare individual component prices, you'll notice that they don't add
up. All I know is that a COTS Taiwan-made desktop PC costs a lot less
than the same hardware packed into a rackmount, because of economies of
scale. It's as much about the price of the rackmount case as it is
about having to switch the assembly line (COTS warez are knocked out
in 10-100 k quantities).
> put in $2000 of hardware. The Rackmount comes to $2,200/node where the
> 'mix-and-match' comes to $2000/node. This is a very small increase as far
> as percents go. This dollar investment is quickly made up by the level of
$2000/node? That's a lot. According to
http://www.pricewatch.com/1/234/2187-1.htm
I can get an Athlon 700 (admittedly with only 32 MBytes of RAM, but RAM
is cheap nowadays) for $511. While in a real-world case I'll probably
be looking more at $1000, there's a definite difference between $1000
and $1200 (plus each node's share of the 19" rack itself, which can
run up to $1000 per rack).
Typically, the more nodes you use, the more the "small" differences
start to add up. A $200 price difference per node results in a $20,000
bill if we're talking 100 nodes.
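To make that explicit, here's a toy calculation (Python; the prices
are the assumptions from above, the nodes-per-rack figure is mine):

  # toy rackmount premium for a 100-node cluster (assumed prices)
  nodes = 100
  case_premium = 200     # extra cost of a 19" rackmount case per node ($)
  racks = 2              # assuming 50+ nodes fit per rack
  rack_price = 1000      # assumed price of one 19" rack ($)

  premium = nodes * case_premium + racks * rack_price
  print("rackmount premium for %d nodes: $%d" % (nodes, premium))
  # -> rackmount premium for 100 nodes: $22000
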
> organization and standardization provided. Look at a rack full of servers,
> with a single monitor, keyboard/mouse tray, KVM to switch between them, and
> a UPS, and tell me that's not a better way of doing things.
I'd rather have a single console for the entire cluster (2-3 for
redundancy in the case of large clusters), thank you. As you noticed,
the shelved PCs in the links I mentioned do have a single console for
the whole cluster. This has nothing to do with rackmount versus
non-rackmount.
> If you are collocating the system(s) somewhere, the SAVINGS of a high
> density solution (1U servers, or a '3-in-1' 2U) is unquestionable. Look at
> the price of colo's recently? By spending a few more bucks to get a higher
> density solution, you save the cost of renting another rack/cage/room.
In academic settings, space is usually not a problem. As to density,
I'm not sure a 4U rackmount case can beat an industrial-strength
shelf. 2U or 1U might be another matter, but a room in a university is
not a climate-controlled ISP room with raised floors and a set of
admins. The costs for administration and location (plus power) do not
usually figure into the budget in an academic setting.
> > Notice that these are big installations. Anyone knows how Google
> > houses their 6 kBoxes?
>
> Actually, I don't know how Google houses their machines... I would be
> interested to see their setup... 6,000? Wow.
I've tried hunting for some photos on http://google.com , but there's
very little meat there. Whoever can find some actual photos, please
post them here.
> Right now we are building a 1200 node setup for a company, I'm glad I'm not
> the one that has to draw out the networking plan for that! :)
Wow. FastEthernet (channel bonded), or Myrinet? Something else
entirely?
> Thanks for the links, very informative.
>
> Cody
You're very welcome. I forwarded your last mail to the Beowulf list,
to see what the folks there might want to say on housing issues.
Those of you interested in the issue might want to peruse
http://www.supercomputer.org/cgi-bin/search.cgi?ul=&ps=20&np=0&q=rack
I've thought a bit about housing DSP clusters, both with on-chip
memory only and with off-chip memory, but I'd like to hear your ideas
on the matter. In particular, SHARC-type 6-link DSPs would require
short interconnects, unlike long-haul networking technologies such as
GBit Ethernet.
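One way to picture why the links have to stay short: with six links
per node, one natural packaging is a 3D mesh/torus in which every node
talks only to its physical neighbours over short point-to-point links.
A tiny sketch of the neighbour mapping (Python, hypothetical 4x4x4
node numbering):

  # six nearest neighbours of a node in a 4x4x4 torus
  X, Y, Z = 4, 4, 4

  def neighbours(x, y, z):
      return [((x + dx) % X, (y + dy) % Y, (z + dz) % Z)
              for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                 (0, -1, 0), (0, 0, 1), (0, 0, -1))]

  print(neighbours(0, 0, 0))
  # -> [(1, 0, 0), (3, 0, 0), (0, 1, 0), (0, 3, 0), (0, 0, 1), (0, 0, 3)]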