[Beowulf] 'liquid cooled' racks

Mark Hahn hahn at physics.mcmaster.ca
Mon Dec 4 12:28:38 PST 2006

>> 7.5 kW/rack isn't much; are you designing low-power nodes?
> Yup -- I guess so.  For example our current cluster nodes are dual core 
> Opteron 175s (Supermicro H8SSL-i motherboards).  They cost about $1200 for 2 
> x 2.2 GHz, with 1GB per core, and use 180 W under load (per 1U).

nice.  for loosely-coupled workloads, that's a smart design, though
I'm curious: did you consider and reject something from the Core2 world?
(its excellent in-cache FP throughput would also suit loosely-coupled jobs.)
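those numbers pencil out nicely against the 7.5 kW/rack figure above; here's a quick back-of-envelope sketch (my arithmetic, using only the figures quoted in this thread):

```python
# back-of-envelope check on the thread's figures: 180 W load per 1U node,
# 7.5 kW rack budget, $1200 per dual-core node (all quoted above).
rack_budget_w = 7500          # 7.5 kW per rack
node_load_w = 180             # measured per 1U node under load

nodes_per_rack = rack_budget_w // node_load_w
print(nodes_per_rack)         # 41 -- essentially a full 42U rack of 1U nodes

usd_per_core = 1200 / 2       # dual-core node price, split per core
print(usd_per_core)           # 600.0
```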

>> APC has something similar with the cooling on the side.
> Can you give a positive or negative opinion about either the Knurr or APC 
> racks?  Have you used them yourself in a system?

I'm afraid not.  if your heart is set on this approach, my main 
suggestion is to carefully consider the capacity and quality of your 
chilled-water (CW) supply.  for instance, if your CW loop is run by campus 
people and is shared with human-space cooling, you're probably in trouble.

>> I'm pretty skeptical of the sealed-pod approach, since it seems to multiply 
>> the number of parts, create access issues, doesn't seem to actually save on 
>> space, etc.
> It does save on vertical space, and it reduces noise to the level where you 
> can work comfortably in the room.

well, I can't speak to your room's limitations, but the Knurr stuff does 
appear to consume some vertical space of its own.  I also can't tell whether 
it's significantly deeper, to allow for internal hot- and cold-air-handling 
plenums (plena?).

my experience with system noise is that it's very dependent on the 
systems themselves.  the chillers are not particularly noisy, but 
some systems are 80 dB all the time; others are 65-70 dB when cool and 
85 dB when warm.

>> I would definitely consider a normal big-chillers approach _with_
>> back-of-rack CW boosters (heat exchangers).
> Can you provide a URL or recommendation for these back-of-rack CW boosters?

um, well, SGI, HP and IBM all seem to source them from somewhere; I'm not 
sure from whom.  I think it makes more sense than the Liebert XDR approach 
(over-the-top refrigerant piping).
> (PS: though that approach still sounds noisy!)

why?  the main noise source is the fans in the server, not the chillers.
and I think the back-door coolers are normally passive.  the 
"eServer Rear Door Heat Exchanger" appears to be.

HP's thing is like a rack with a half-rack of fans+heat-exchanger next to 
it.  APC has a similar half-rack slice that can be slotted anywhere (it 
just cools the hot exhaust air from the adjacent racks).

if you need a quiet room, well OK, but it sounds more like putting 
computers in office space, rather than a machineroom.  (I don't find 
there's much reason to be in machinerooms any more.)

I hate to sound like an ass, but I'm pretty skeptical of the vertical-space
argument as well, since you lose some vertical space inside the rack with
Knurr, and normal open-concept rooms _don't_ actually consume much vertical
space.  for instance, consider a room with three normal Liebert downdraft
units, a 16" underfloor plenum and all the racks arranged in a row 
with their hot side facing the chillers.  that would work awesomely, and 
would probably fit in a total height of 8 ft.  I'd worry more about 
optimizing the layout of the racks, inside and out (for instance, Ethernet 
switches and power breakers consume 4U out of my compute racks; further, 
I have 11 racks of incredibly low-dissipation Quadrics switches, which 
should really be taken out of the main airflow, since they're <2 kW/rack.)
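to put that last point in numbers, a rough sketch (the <2 kW/rack switch 
figure is from above; the compute-rack figure assumes a full rack of the 
180 W nodes mentioned earlier, which is an assumption, not a measurement):

```python
# rough power-density comparison: a low-dissipation switch rack vs. a
# hypothetical full compute rack of 41 x 180 W nodes.
switch_rack_kw = 2.0               # stated upper bound for a Quadrics rack
compute_rack_kw = 41 * 0.180       # assumed full rack of 1U nodes

print(round(compute_rack_kw, 2))   # 7.38
ratio = compute_rack_kw / switch_rack_kw
print(round(ratio, 1))             # 3.7 -- compute racks need ~3.7x the cooling
```

so mixing the two in the same cooled aisle wastes airflow on racks that barely need it.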
