[Beowulf] Reasonable upper limit in kW per rack for air cooling?
james.p.lux at jpl.nasa.gov
Sun Feb 13 16:06:06 PST 2005
----- Original Message -----
From: "David Mathog" <mathog at mendel.bio.caltech.edu>
To: <beowulf at beowulf.org>
Sent: Sunday, February 13, 2005 1:50 PM
Subject: [Beowulf] Reasonable upper limit in kW per rack for air cooling?
> There are a series of white papers by APC here:
> where they discuss various power and cooling factors. They note
> a disconnect between the higher densities achieved by blades and
> similar high density racks and the practicality of actually
> cooling these beasts. Basically it comes down to you save space
> on the rack and then give it all back on the cooling system. Think
> of it minimally in these terms - to move enough cfm at less than 30
> feet per minute starts to require a duct larger than the rack itself!
I think that's 30 ft/second... 1800 lfpm would be a reasonable duct speed.
30 lfpm is really, really slow (that's 1/2 ft/sec, which is a pretty darn
gentle breeze).
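To put numbers on the duct-size point, here's a back-of-the-envelope sketch. The 10 kW load, 10 C air temperature rise, and round property values are illustrative assumptions, not figures from APC's papers:

```python
# Back-of-the-envelope airflow and duct sizing for a 10 kW rack.
# Assumptions (illustrative): 10 C air temperature rise, air at
# ~1.2 kg/m^3 with cp ~1005 J/(kg*K).

RHO_AIR = 1.2         # kg/m^3
CP_AIR = 1005.0       # J/(kg*K)
M3S_TO_CFM = 2118.88  # cubic meters/second -> cubic feet/minute

def rack_airflow_cfm(heat_w, delta_t_c):
    """Volumetric airflow (CFM) needed to carry heat_w watts at a delta_t_c rise."""
    mass_flow = heat_w / (CP_AIR * delta_t_c)   # kg/s of air
    return (mass_flow / RHO_AIR) * M3S_TO_CFM

def duct_area_ft2(cfm, velocity_fpm):
    """Duct cross-section (ft^2) for a given airflow and face velocity."""
    return cfm / velocity_fpm

cfm = rack_airflow_cfm(10_000, 10.0)
print(f"airflow: {cfm:.0f} CFM")
print(f"duct at 1800 fpm: {duct_area_ft2(cfm, 1800):.1f} ft^2")
print(f"duct at 30 fpm:   {duct_area_ft2(cfm, 30):.0f} ft^2")
```

At a sane 1800 fpm the duct is about a square foot; at 30 fpm it balloons to roughly 60 ft^2, i.e. larger than the rack itself, which is exactly the disconnect being described.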
> In terms of TCO, at the moment, APC rejects the notion that
> these ultra high density machines are cost effective because they
> are so very difficult to cool.
> It seems to me that at a certain power point the racks are going to
> have to resort to water cooling. Long ago the ECL mainframes were
> cooled this way, but it's been a long time since most of us have
> seen water pipes running into the computers in a machine room.
High power density devices (like power electronics or high power vacuum
tubes) have always resorted to liquid cooling. It's so much more efficient
than trying to cool with air, for a variety of reasons, but primarily
because it decouples the physical device from the radiator surface.
Consider liquid vs air cooled internal combustion engines. Really high
power density often uses some sort of phase change (ebullient) cooling,
although the design challenges are significant. Even some laptops have used
liquid or phase change cooling (heat pipes) to move the heat from the CPU to
the case. An interesting exception to liquid cooling for high power devices
is big generators, which are cooled with hydrogen gas (low viscosity and
density, so low aerodynamic drag).
But liquid cooling, per se, isn't a crippling thing to work with. And it
actually allows certain design economies: you no longer have to constrain
the design for air flow or for conduction through the boards, nor do you
have to fool with an array of CPU fans, video card fans, etc.
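The efficiency argument is easy to quantify: per unit volume, water carries far more heat than air for the same temperature rise. A quick sketch using round textbook property values (not measurements):

```python
# Compare the volumetric heat capacity of water vs. air -- i.e., how much
# heat a given volume of coolant carries per degree of temperature rise.
# Property values are round textbook numbers.

RHO_AIR, CP_AIR = 1.2, 1005.0         # kg/m^3, J/(kg*K)
RHO_WATER, CP_WATER = 1000.0, 4186.0  # kg/m^3, J/(kg*K)

vol_heat_air = RHO_AIR * CP_AIR        # J/(m^3*K)
vol_heat_water = RHO_WATER * CP_WATER  # J/(m^3*K)

ratio = vol_heat_water / vol_heat_air
print(f"water carries ~{ratio:.0f}x more heat per unit volume than air")
```

Roughly 3500:1, which is why a small pipe can do the work of an enormous duct.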
> Cooling a 10 kW rack well looks to be extremely tough with air,
> and going much above that would seem to require something approaching
> a dedicated wind tunnel. Any opinions on how high the power
> dissipation in racks will go before the manufacturers throw
> in the air cooling towel and start shipping them with water cooling?
Consider that 10 kW is 5-10 times the power dissipation of a hair dryer.
One solution that might turn up is an internal cooling loop that moves
heat from inside the chassis to a big heatsink on the surface. Modern rack
mounted PCs aren't particularly designed for efficient thermal transfer
with minimal air flow (there's no economic incentive for it).
There are economies of scale to a common chiller, though, because when you
get to large HVAC, cold water is what you get, rather than cold air, because
moving cold air is a LOT more expensive than moving cold water.
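For scale, here's a hedged sketch of the chilled-water flow a hypothetical 40 kW rack would need, assuming a 10 C coolant temperature rise (both numbers are illustrative assumptions, not vendor specs):

```python
# Chilled-water flow needed to carry 40 kW at an assumed 10 C coolant
# temperature rise, expressed in US gallons per minute.

CP_WATER = 4186.0   # J/(kg*K)
KGS_TO_GPM = 15.85  # kg/s of water -> US gallons/minute (at ~1 kg/L)

def water_flow_gpm(heat_w, delta_t_c):
    """Water flow (GPM) needed to remove heat_w watts at a delta_t_c rise."""
    return (heat_w / (CP_WATER * delta_t_c)) * KGS_TO_GPM

print(f"40 kW rack: {water_flow_gpm(40_000, 10.0):.1f} GPM")
```

That's about 15 GPM, a garden-hose-scale flow through a modest pipe, versus the thousands of CFM of air the same load would demand.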
> If you were designing a computer room today (which I am) what would
> you allow for the maximum power dissipation per rack _to_be_handled_
> by_the_room_A/C. The assumption being that in 8 years if somebody
> buys a 40kW (heaven forbid) rack it will dump its heat through
> a separate water cooling system.
There are such things as individual rack chillers, which you would bolt to a
rack and then hook up to a centralized cold water source.
> David Mathog
> mathog at caltech.edu
> Manager, Sequence Analysis Facility, Biology Division, Caltech
> Beowulf mailing list, Beowulf at beowulf.org