Racks vs. pile of PCs
mprinkey at aeolusresearch.com
Tue Aug 13 09:49:21 PDT 2002
Here are some thoughts on the value of rack vs. non-rack clusters:
The incremental cost of a rackmount node over a desktop node built
from the same hardware is roughly $100 per node: a rackmount case
and PS runs ~$150 versus ~$50 for a standard case and PS.
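That per-node arithmetic can be sketched as follows; the ~$150 and ~$50 figures come from this post, and the helper name is mine:

```python
# Rough case/PS cost figures from the post (2002 dollars, approximate).
RACKMOUNT_CASE_PS = 150  # rackmount case + power supply, ~$150
STANDARD_CASE_PS = 50    # standard desktop case + power supply, ~$50

def rack_case_premium(nodes):
    """Case/PS premium alone for racking a cluster of `nodes` machines."""
    return nodes * (RACKMOUNT_CASE_PS - STANDARD_CASE_PS)

print(rack_case_premium(20))  # $2000 for 20 nodes, cases and PSs only
```

Note this counts only the case and PS; rails, the rack itself, and vendor markup push the real premium higher.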
Packing density for desktop/minitower cases is roughly equal to 4U
case configurations. The bulk dimensions are more or less the same.
It is easy to fit almost any combination of motherboard, one or
two CPUs, and interface cards (64-bit PCI, etc.) into a desktop, 3U, or
4U case. Packing starts to become a real issue with 2U cases (riser
cards give either three 32-bit or three 64-bit PCI slots, or one AGP
and two PCI slots). With 1U cases, you usually only get one expansion
slot, so you have to make it count. I don't think I would trust
two Athlon/P4 CPUs in a 1U case anyway. Motherboard options in 1U
cases become further limited as DIMM/RIMM sockets may need to be
angled to fit in the case. (1.75 inches isn't much!)
My thoughts right now are that packing density of 2 CPUs / U is questionable.
I know that it has been done and is possible; talk to me after
those machines have been online for 18 months. (How many crashes
and fan, hard-drive, and PS failures?) So that means 2U cases with
dual CPUs might be the logical limit... or maybe 1 CPU in a 1U case,
but I don't think that makes economic sense unless you really need
the memory bandwidth. My experience with single P4s in 2U cases
makes me hesitate to move to duals in 2U: there is A LOT of heat
generated.
So, for me, 1 CPU / U is potentially workable, but maybe not wise.
I have been looking at 3U cases as a real possibility. There is
no need for riser cards, which is a very good thing IMO. You have
full access to all expansion slots, plus there is a bit more case
real estate/thermal ballast. That works out to 2 CPUs / 3U, only a
small improvement over my 1 CPU / 2U P4 nodes, but it makes the
nodes much easier to engineer.
I think that, heroic cooling efforts aside, 1 CPU / U (dual CPUs in
2U) is a reasonable limit, with 0.66 CPU / U (dual CPUs in 3U) being
the more approachable density.
I have not yet decided if this is enough of a gain over the desktop
(~0.5 CPU / U) to be worth the bother.
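The densities discussed above can be tabulated; a minimal sketch, assuming a 42U full-height rack (my number for illustration, not from the post):

```python
RACK_U = 42  # assumed full-height rack; the post does not specify one

def cpus_per_rack(cpus_per_case, case_u, rack_u=RACK_U):
    """CPUs that fit in one rack for a given case height and CPU count."""
    return (rack_u // case_u) * cpus_per_case

# (CPUs per case, case height in U) for the configurations in the post
configs = {
    "1U dual (questionable)":  (2, 1),
    "2U dual (hot)":           (2, 2),
    "3U dual (proposed)":      (2, 3),
    "2U single (my P4 nodes)": (1, 2),
}
for name, (cpus, case_u) in configs.items():
    density = cpus / case_u
    print(f"{name:26s} {density:.2f} CPU/U -> "
          f"{cpus_per_rack(cpus, case_u)} CPUs per rack")
```

The 3U dual and desktop options land close together (0.66 vs. ~0.5 CPU / U), which is why the gain may not be worth the bother.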
At Tuesday, 13 August 2002, "David Mathog" <mathog at mendel.bio.caltech.
>Racks sure look nice and there is no question that they
>are space efficient, but I'm really starting to wonder if
>they are such a great idea for a smallish cluster (<=20 nodes)
>in those situations where there is enough space for a
>classic pile of PCs. I mean, what other advantages do they
>have besides those two to offset their many disadvantages?
>Racks better than piles:
>1. Space efficiency.
>2. Aesthetics (racks look cool)
>Piles better than racks (these are not orthogonal):
>1. Internal space constraints
>2. CPU/motherboard cooling. This follows from 1.
>3. Motherboard/CPU options. This follows from 1 and 2.
> With a few exceptions most motherboard/CPU combinations
> will fit into a standard ATX case - good luck getting
> a 2.4 GHz P4 into a 1U.
>4. Initial purchase price for equivalent performance.
>5. Maintenance costs (rack parts tend to be nonstandard
> and expensive to replace, for instance, 1U power supplies).
>I estimate that for a small cluster (<1 rack's worth of equipment) with
>node guts (mobo,CPU,disk,ram) costing <= $1200 the racked version
>will cost at least 20-30% more than the piled version. So if a piled
>20 node cluster costs $24000, the equivalent racked version will
>be at least $30000. $6000 seems a lot to pay for no extra performance.
>If the "guts" were much more expensive, the additional rack costs would,
>in theory, be a lower percentage. In practice, it is my impression that
>the ratio is no lower because the vendors charge even more for the
>racked versions of high performance nodes.
>mathog at caltech.edu
>Manager, Sequence Analysis Facility, Biology Division, Caltech
>Beowulf mailing list, Beowulf at beowulf.org