Advice for 2nd cluster installation

Mike Eggleston mikee at
Fri Jan 10 10:59:05 PST 2003

On Fri, 10 Jan 2003, Robert G. Brown wrote:

> The issue here is more one of whether or not space is "expensive" to
> you.  If not, blade solutions are going to be relatively more expensive
> per flop and relatively less powerful in terms of networking and storage
> options and configurations beyond a single/mere latency question.  In
> order of cost per raw aggregate GFLOP (by whatever measure you like) it
> runs blade, rackmount, tower/shelving.  In terms of configurability and
> available node options, the order is reversed -- access to the full bus
> and maximal disk and cooling options in a tower, usually a riser subset
> of 1-3 slots in a rackmount chassis, and quite possibly no bus at all or
> a single expansion option in a blade design.
>
> Consequently, one usually chooses a cluster configuration based at least
> in part on the "cost" of space to you.  If you have a gym-sized room,
> mostly empty and nicely climate controlled, and only plan to EVER own
> maybe 64 to 128 nodes at any one time, heck, save the money, buy more
> nodes, and use tower chassis and heavy duty shelving from Home Depot.
> If you have a decent sized server room (maybe five to ten meters square)
> you're more likely to need to go with rackmounts to keep things neat and
> clean and provide room to work and for additional systems as time
> passes.  If you have a glorified broom closet with a window unit for an
> air conditioner, a blade system suddenly looks very attractive.
>
> There can of course be alternative ways costs and benefits can be
> locally tallied to push toward one or the other configuration; this is
> just intended to be illustrative.  The point is, do a cost-benefit
> analysis, being fairly honest about costs and benefits in your local
> environment, and be properly skeptical about vendors' or integrators'
> claims for the same.  Whatever you do should make sense, and not just be
> done because that was the way you thought everybody did it.
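The cost-benefit tally described above can be sketched as a toy calculation. All of the prices, GFLOPS figures, and floor-space costs below are made-up placeholder numbers, not quotes from any vendor; the point is only the arithmetic of folding a per-node space cost into dollars per GFLOP:

```python
# Toy cost-per-GFLOP comparison of the three chassis styles discussed
# above. Every number here is hypothetical -- substitute real quotes
# and your own local cost of floor space before drawing conclusions.

# (price per node in dollars, GFLOPS per node) -- assumed identical
# boards, so only the packaging premium differs.
configs = {
    "tower":     (1500, 4.0),   # cheapest per node, full bus, best cooling
    "rackmount": (2000, 4.0),   # same board, denser packaging premium
    "blade":     (2800, 4.0),   # highest premium, least expandable
}

# Hypothetical floor-space cost amortized per node: towers on shelving
# eat the most floor area, blades the least.
space_cost_per_node = {
    "tower":     120.0,
    "rackmount":  60.0,
    "blade":      25.0,
}

def dollars_per_gflop(name):
    price, gflops = configs[name]
    return (price + space_cost_per_node[name]) / gflops

for name in sorted(configs, key=dollars_per_gflop):
    print(f"{name:9s}  ${dollars_per_gflop(name):7.2f} per GFLOP")
```

With cheap space the ordering comes out tower, rackmount, blade, matching the ranking above; crank `space_cost_per_node` high enough (the broom-closet case) and the ordering can invert, which is exactly the local cost-benefit question being urged.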

I'm part of a startup group that will be using a larger cluster soon. I
currently have a very ad-hoc cluster made of old machines I could
scrounge for the task. What I want to do, mostly because of the money
involved, is to buy bare motherboards, add CPU, fan, power supply, and
memory, put the whole thing in some sort of rack or enclosure to keep
it neat, and hook it all to a switch. Given this low-tech idea, does
anyone have suggestions on how to rack/house these boards? Once
installed I will also redirect an A/C plenum(?)/duct to directly above
the rack. Space is not so much at a premium as simple capital at this
point.


More information about the Beowulf mailing list