Fw: Re: [Beowulf] cooling question: cfm per rack?
Robert G. Brown
rgb at phy.duke.edu
Sat Feb 12 06:36:17 PST 2005
On Fri, 11 Feb 2005, David Mathog wrote:
> Sorry, to be vague, there are just so many unknowns.
Always. :-)
>
> I also talked to Darryl Willick, who runs a bunch of machine rooms
> on campus for Chemistry and some of Rees, Bjorkman and Mayo's
> stuff. His main room is about at capacity now with
> 6 full racks and a few odds and ends. He has 2 x 250A panels
> in there and apparently only a 45kW A/C unit. That second
> number is really odd because they aren't usually rated that
> way, but that's the number he remembered. If he's right, that's
> 45000/3500 ~= 13 tons, roughly the same as the unit currently
> in the Rees area. He said his had to be serviced
> recently because they were having overheating problems, but only
> a belt was changed. Unknown how many cfm it is. He has a small
> workstation area that is somehow or other connected to his machine
> room ventilation wise, and apparently when they prop the door open
> in the workstation area it causes problems in the machine room.
> So maybe it would make sense to put a small separate A/C unit
> in the proposed classroom to avoid those sorts of complications
> in the future. Or maybe it can tap off building air.
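
A quick sanity check on that conversion, in a couple of lines of Python
(3.517 kW per ton of refrigeration is the nominal conversion; the 45 kW is
just the figure quoted above):

KW_PER_TON = 3.517        # 1 ton of refrigeration ~= 12000 BTU/hr ~= 3.517 kW
print(45.0 / KW_PER_TON)  # prints ~12.8 -- call it 13 tons
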
>
> Darryl did say something interesting though, he said that for
> some units the A/C people can increase the capacity by changing
> the pulleys around. Apparently this blows more air, and the
> cold water isn't limiting, so it effectively upgrades the unit
> without changing very much. Darryl said that this was done
> at some point for Mayo's computer room in the subbasement
> of the BI.
I'm sure you remember this from my posts on this topic before, but there
are lots of bad experiences with AC that we and others on the list have
had, and that you can profit from. Don't forget things like:
  * A kill switch for the room, for the day the AC fails altogether at 2:30
a.m.
* Automated monitoring and (if you've got one) a call cycle so that
maybe somebody can get there in time to shut things down before the kill
switch kicks in EVEN at 2:30 a.m.
  * The fact that at many places, the physical plant people have an
annoying tendency to try to save energy by throttling the A/C down to a
standby mode (where the chilled water is allowed to warm up to maybe
18C) in the winter, because hey, it's cold outside, right? Often this is
done automatically, without human thought or control, and often it
triggers exactly the events for which the first two interventions above
are required. This may not apply to you in your generally warm clime
(compared to here, anyway) but is worth checking, for sure.
* When computing the cost/benefit of power vs AC, be aware (to put
into words what you're working toward anyway) that the true optimum is
going to be biased towards an excess of AC capacity. This is for
several reasons, once you think about it. The most important one is
that adding new/additional power is relatively cheap whenever you do it;
adding new/additional AC capacity later can be VERY expensive -- as
expensive as adding AC at all in the first place.
  * Surplus capacity can also keep the room ambient colder (generally
better) while operating in the normal load range, and may be cheaper in
terms of operating efficiency, as AC COP depends on the temperature
differential between the delivered and returned chiller water (although
the blowers and pumps draw power too -- I don't know how this all works
out in the wash; see the toy COP sketch after this list).
* Redundancy is good, if you've got the space. If one blower out of
three goes, the remaining two may be able to keep the space operational
while service is performed, or at least keep it cool enough to avoid an
involuntary kill or midnight call.
* As you note -- it really helps to get professional advice on this
from an engineer or architect who specializes in server room
infrastructure design and support. Not that you shouldn't educate
yourself in it too -- it's just that they SHOULD have a broad base of
personal professional experience to draw on as well as some classroom
education on the issues to be faced. Worth paying for.
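Here's a minimal sketch in Python of the COP point above -- just the ideal
(Carnot) bound, to show how strongly the efficiency ceiling depends on the
temperature lift. The temperatures are made-up illustrative values, real
chillers run well below the Carnot number, and blower/pump power is ignored:

def carnot_cop(t_cold_c, t_hot_c):
    """Ideal (Carnot) cooling COP between two temperatures in Celsius."""
    t_cold = t_cold_c + 273.15
    t_hot = t_hot_c + 273.15
    return t_cold / (t_hot - t_cold)

# e.g. 7C chilled water against a 30C heat-rejection temperature:
print(carnot_cop(7, 30))     # ~12.2 -- the theoretical ceiling
# versus 18C "standby" chilled water against the same 30C:
print(carnot_cop(18, 30))    # ~24.3 -- smaller lift, higher ideal ceiling

(Which is, of course, part of why physical plant likes to let the chilled
water warm up in the winter; the catch is what that does to your room when
the load spikes.)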
As you note, it is very difficult to know exactly where future power
requirements and node densities will go per rack. Maybe blades will
take over the universe, and racks will suddenly become very hot indeed.
Some non-blade racks can achieve close to double the standard node/CPU
densities in terms of floorspace footprint (e.g. Rackable, IIRC).
Multi-core CPUs are on the threshold of appearing, and although they
also look like they might be power/clock limited BECAUSE of the heat
problem, power per cubic foot of rack space is still going to scale up
in some fashion as compute capacity per cubic foot goes up.
Alternatively, some room designs might install the DUCTWORK now that can
support a (say) doubling of AC capacity in the future, and reserve space
in the facility for the local units that would drive this capacity but
leave that space empty. Then you can (eventually) add the units without
necessarily having to rip everything apart.
with raised floor designs (where you just duct per rack location) but
one would expect that they could manage it for other kinds of ducted
delivery and return if they try.
In any infrastructure project, it really pays to think about this stuff
ahead of time, as you are.
rgb
>
> Regards,
>
> David Mathog
> mathog at caltech.edu
> Manager, Sequence Analysis Facility, Biology Division, Caltech
>
> ------------- Forwarded message follows -------------
>
> At 08:17 AM 2/11/2005, you wrote:
> >In designing a computer room two key factors are:
> >
> >1. Power in (electricity)
> >2. Power out (A/C)
> >
> >The second term really has two parts:
> >
> > A. the amount of air moved
> > B. the reduction in temperature of that air across the A/C unit
> >
> >The latter part is specified in tons. The A/C guys I've spoken
> >with recently utilize some more or less standard relationship
> >between cubic feet per minute (cfm) and A/C tons for the units they
> >maintain. These run off the campus cold water supply, so
> >it makes sense that heat out is proportional to airflow across the unit,
> >assuming that the cold water has effectively unlimited heat capacity.
> >
> >However, in terms of cooling the units themselves, the amount of
> >air flow through the racks is also important. That flow is
> >also in cfm. Ideally cfm through the racks would be equal to cfm
> >through the A/C, ie, all air goes once through the racks and then
> >directly through the A/C. Even more ideally cfm through _each_ rack
> >could be modulated somehow, since some racks move much more
> >air than others and putting a low flow rack next to a high flow rack
> >might drive the air the wrong way through the low flow unit.
> >
> >How does one calculate an optimal cfm through a rack?
>
> Decide on a maximum outlet temperature (say, 30C)
> Find your inlet air temperature (say, 15C)
> You know your dissipation.. (say, 5kW)
>
> Calculate how much air you need to move using the specific heat of air.
> (about 1 kJ/(kg K))
>
> 5 kJ/sec means you'd need 5 kg/sec for a 1 degree rise, but here, with a 15
> degree rise, you can get by with about 0.33 kg/sec. Turn the kg/sec into
> cfm... air is roughly 1.2 kg per cubic meter, so 0.33 / 1.2 = about 0.28
> cubic meters/sec. There are about 35 cubic feet in a cubic meter, so we need
> about 10 cubic feet per second. Multiply by 60 and you get roughly 600 cfm.
>
> Now.. that's idealized, so double it. 1200 cfm or so.
>
>
> Step 2: How big is the duct? Generally, you don't want to go any faster
> than 1000 linear feet per minute, so your duct will need to be a bit over
> a square foot. (you begin to see why you don't want some little 6" diameter
> blower...)
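
The same rule-of-thumb sizing, as a short Python sketch with the same
assumed numbers (5 kW, a 15 C rise, air at roughly 1.2 kg/m^3, a 2x fudge
factor, and a 1000 fpm duct velocity limit) -- this is only the
back-of-the-envelope estimate above, not a real HVAC design:

CP_AIR = 1.0          # specific heat of air, kJ/(kg*K), approximately
RHO_AIR = 1.2         # density of air, kg/m^3, near room temperature
M3_TO_FT3 = 35.3      # cubic feet per cubic meter
FUDGE = 2.0           # margin for non-ideal mixing, bypass, etc.
MAX_DUCT_FPM = 1000   # rule-of-thumb duct velocity limit, linear ft/min

def rack_cfm(dissipation_kw, inlet_c, outlet_c, fudge=FUDGE):
    """Airflow (cfm) needed to carry dissipation_kw with the given air rise."""
    delta_t = outlet_c - inlet_c                    # allowed temperature rise, K
    kg_per_s = dissipation_kw / (CP_AIR * delta_t)  # mass flow of air
    m3_per_s = kg_per_s / RHO_AIR                   # volume flow
    return m3_per_s * M3_TO_FT3 * 60 * fudge        # cubic feet per minute

def duct_area_ft2(cfm, max_fpm=MAX_DUCT_FPM):
    """Duct cross-section (sq ft) to stay under the velocity rule of thumb."""
    return cfm / max_fpm

cfm = rack_cfm(5.0, 15.0, 30.0)
print(round(cfm), "cfm,", round(duct_area_ft2(cfm), 1), "sq ft of duct")

With round numbers that comes out near 1200 cfm and a bit over a square
foot of duct, consistent with the doubled estimate above.
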
>
>
>
> >For a specific example with round numbers, let's say it's a
> >25U rack, dissipates 10kW, and has a single 50 cfm output
> >fan per 1U node. (I.e., all air out must go through that path.)
> >
> >There seem to be a bunch of variables that are hard to deal with.
> >For instance, adding up the exhaust fans would give 50*25 = 1250 cfm.
> >Is that all there is to it? But that type of fan only runs at
> >the stated flow rate if the pressures are exactly as specified.
> >Without incredibly careful balancing of the pressure across the
> >rack it won't generally run at 50 cfm.
>
>
> This is precisely the case. And, of course, the actual circumstances will
> be nothing like what the design specs are.
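
A tiny Python illustration of that fan-curve point -- both curves below are
invented for the example (a straight-line fan curve and a quadratic system
resistance), so only the shape of the argument matters, not the numbers:

def fan_pressure(q_cfm, free_air_cfm=50.0, max_static_in_h2o=0.25):
    """Made-up linear fan curve: max static pressure at zero flow,
    zero pressure at the free-air rating."""
    return max_static_in_h2o * (1.0 - q_cfm / free_air_cfm)

def system_pressure(q_cfm, k=0.0002):
    """Made-up system resistance: pressure drop grows as flow squared."""
    return k * q_cfm ** 2

def operating_point(lo=0.0, hi=50.0, iters=60):
    """Bisect for the flow where fan pressure equals system pressure drop."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if fan_pressure(mid) > system_pressure(mid):
            lo = mid          # fan can still push more air here
        else:
            hi = mid
    return (lo + hi) / 2

print(round(operating_point(), 1), "cfm actually delivered")  # well under 50

The rated free-air number is the zero-backpressure end of the fan curve;
with any real resistance behind it, the delivered flow lands wherever the
two curves cross, which can easily be half the nameplate figure.
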
>
>
> >Is cfm the key unit here or should one think in terms of pressure
> >at various points in the room?
>
> Trying to come up with an accurate aerodynamic model is a worthy challenge
> for a very large cluster (computational challenge, not thermal).
>
> It's all done by rules of thumb and adding lots of margin.
>
> Use the rough sizing technique to get an approximate air flow. Use
> reasonable sized ducts and air speeds. Measure the actual outlet
> temperatures.
>
> Actually, what most people do is a rough sizing, then call in someone who
> actually does this for a living (an HVAC contractor), and use their own
> rough sizing to validate what the contractor tells them they should have.
>
>
>
> >Thanks,
> >
> >David Mathog
> >mathog at caltech.edu
> >Manager, Sequence Analysis Facility, Biology Division, Caltech
> >_______________________________________________
> >Beowulf mailing list, Beowulf at beowulf.org
> >To change your subscription (digest mode or unsubscribe) visit
> >http://www.beowulf.org/mailman/listinfo/beowulf
>
> James Lux, P.E.
> Spacecraft Radio Frequency Subsystems Group
> Flight Communications Systems Section
> Jet Propulsion Laboratory, Mail Stop 161-213
> 4800 Oak Grove Drive
> Pasadena CA 91109
> tel: (818)354-2075
> fax: (818)393-6875
>
>
>
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org
> To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
>
--
Robert G. Brown http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567 Fax: 919-660-2525 email:rgb at phy.duke.edu