[Beowulf] /. Cooler room or cooler servers?

Robert G. Brown rgb at phy.duke.edu
Fri Apr 8 10:07:32 PDT 2005


On Fri, 8 Apr 2005, Mark Hahn wrote:

> > Water has all sorts of nifty properties -- large heat capacity (relative
> > to air), intermediate boiling point, LARGE latent heat of vaporization
> 
> water cooling is certainly very attractive to an HPC installation - 
> even if you're not doing high-density stuff (say, >15 kW/rack),
> you've still got obscene amounts of space taken up by airflow 
> and chillers.
> 
> on the other hand, I really don't want 800 water-filled pipes in my 
> machineroom.  putting a heat-exchanger on the front and/or back of each
> rack would work (SGI does it, as well as some OTC products.)  I don't
> think much of the APC sealed-cooled-rack approach, at least not in 
> a machineroom, in part because they don't seem to understand that 
> machines are front-to-back.  Liebert's top-of-rack boosters seem very
> much like a retro-fit solution, to me.
> 
> is it possible to make a flexible heatpipe?  if there was a sealed 
> heatpipe that sucked heat off my CPUs, that would make the problem 
> much easier.  perhaps the cold end of such heatpipes could be cooled
> by a chilled-water loop (even eth-gly), which wouldn't be as bad
> if it had fewer, simpler or factory-configured connectors.
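
Just to hang some numbers on the heat-capacity point before getting to
hardware: the following is a back-of-envelope sketch (plain Python, with
textbook values for the material constants, so treat it as rough physics
and not an engineering calculation) of how much heat a given volume of
water can carry off compared to the same volume of air:

  # Heat carried per unit volume of coolant per degree of temperature
  # rise.  Constants are textbook approximations (water, air at ~20 C).
  rho_water, cp_water = 1000.0, 4.18e3   # kg/m^3, J/(kg K)
  rho_air,   cp_air   =    1.2, 1.005e3  # kg/m^3, J/(kg K)

  vol_cap_water = rho_water * cp_water   # ~4.2e6 J/(m^3 K)
  vol_cap_air   = rho_air   * cp_air     # ~1.2e3 J/(m^3 K)
  print("water/air ratio: ~%.0f" % (vol_cap_water / vol_cap_air))

  # One liter per second of water warming by 10 C removes:
  print("1 l/s water, 10 C rise: ~%.0f W" % (0.001 * vol_cap_water * 10))

which comes out to a ratio on the order of 3500:1 by volume, and
something like 40 kW carried off by a single liter per second of water
with a modest temperature rise.  That is why a couple of small hoses can
in principle do the work of a very large duct.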

There are wet solutions out there, including ones that run liquid right
onto/through the CPU heatsink, with the coolant coming in through
special fittings in the back of the chassis:

  http://www.pyramid.de/e/produkte/server/cluster-liquid-cooling.php

(and more).  Google as always is your friend.

I think that one CAN do all of these things -- it is just "messier" in
every sense of the word, and it adds significantly to the overall
expense, because the further you get from the beaten path, the more
expensive things are.  There are benefits; I just don't know whether
(up-front expense aside) they are ultimately cost-benefit wins in most
cases, or even in any specific case.

As you suggest in your own arguments, my opinion is that if you have the
space and the net power/cooling capacity in the first place, the easiest
thing by far is to build racks or shelves of systems at a density that
can be "comfortably" carried by mainstream, not-horribly-expensive
chiller/airflow combinations -- ones that don't (maybe) require a union
plumber on hand every time you want to remove a node, and that don't
carry whatever horrible set of inspections an integrated "wet"
environment might require to pass code and limit liability.

Code requirements and liability issues don't usually get much mention in
the online advertising for wet solutions, and I don't know what they
are.  I would >>guess<< that at the very least every single circuit
would have to be GFCI-protected (in fact, I think that this is plain old
code just about everywhere already, not just in machine rooms) and that
there would have to be environmental monitors and other cutoffs to
protect people and equipment from the effects of leaks and spills,
especially if either a conducting medium (water) or a toxic medium
(ethylene glycol) were used as a coolant in close proximity to bare-wire
electrical power -- inside a case, for example.

Given the hassle, cost, and complicated code/liability issues, I think
wet >>might<< be a thing for folks doing tight-packed 16 kW racks to
think about, but not so good for <8 kW racks, where good airflow and
reasonable rack spacing can still do the trick pretty effectively.
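
To put the same kind of back-of-envelope numbers on the air side (again
plain Python with textbook constants, and a front-to-back temperature
rise of 10 C picked as a merely plausible figure):

  # Airflow needed to carry off a rack's heat load:
  #   Q = rho * V * cp * dT   =>   V = Q / (rho * cp * dT)
  rho_air, cp_air = 1.2, 1.005e3        # kg/m^3, J/(kg K) at ~20 C
  dT = 10.0                             # inlet-to-outlet rise in C

  for load in (8e3, 16e3):              # rack load in watts
      v = load / (rho_air * cp_air * dT)    # m^3/s
      cfm = v * 35.31 * 60                  # 1 m^3 ~ 35.31 ft^3
      print("%2.0f kW rack: %.2f m^3/s, ~%.0f CFM" % (load/1e3, v, cfm))

which works out to very roughly 1400 CFM for an 8 kW rack and double
that for 16 kW.  The former is something decent rack fans and sensible
hot/cold aisle layout can manage; the latter is where you start needing
serious ducting, or something wetter.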

Is >>ANYBODY<< on this list doing wet?  I'd like to hear about it if so,
as it is an interesting idea and it LOOKS like there are companies that
sell wet, so there must be clients, right?

   rgb

-- 
Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb at phy.duke.edu
