[Beowulf] /. Cooler room or cooler servers?

Mark Hahn hahn at physics.mcmaster.ca
Thu Apr 7 14:33:18 PDT 2005


> (Some (not entirely idiotic) discussion on

faint praise (and justly so!)  I'm feeling surly, so I'll whine about this:

> Data center managers are packing more computing power into smaller
> footprints. But today's racks and blades produce massive amounts of heat per
> square foot in comparison with those old tower servers. Left unchecked, this

"tower"?  kind of hard to tell what the author is referring to - 
desktop-type tower PCs?

but it's true, and often pointless.  I attended a meeting today where we 
decided not to take a blade approach - it would have been very dense,
but ridiculously difficult to cool (30 kW or so in a rack - forget about it!)
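
to put that in perspective, a quick back-of-envelope (plain python; the
20F delta-T across the rack is just an assumption):

    # rough cooling arithmetic for one 30 kW rack
    rack_kw = 30.0
    btu_per_hr = rack_kw * 3412            # 1 kW ~= 3412 BTU/hr
    tons = btu_per_hr / 12000.0            # 1 "ton" of cooling = 12000 BTU/hr
    cfm = 3.16 * rack_kw * 1000 / 20.0     # rule of thumb: CFM ~= 3.16 * watts / dT(F)
    print(f"{btu_per_hr:.0f} BTU/hr = {tons:.1f} tons, ~{cfm:.0f} CFM")
    # -> 102360 BTU/hr = 8.5 tons, ~4740 CFM - for a single rack!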

the real point is that density is not the goal.  getting the job done well
and cost-effectively is...

> Multi-core processing is one technology with the potential to reduce data
> center cooling requirements. In principle, multi-core processors could
> operate at a lower frequency, using less power to achieve today's computing
> levels, thereby running cooler.

grrr, the real power-saver is to run at lower voltage: dynamic power goes as
V^2 but only linearly with frequency.  voltage is not independent of
frequency, of course, but it's the driver here.
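
a toy illustration of why multi-core at lower clock can come out ahead
(made-up scaling factors, not any real part):

    # toy CMOS dynamic-power model: P ~ C * V^2 * f,
    # relative to a V=1.0, f=1.0 single-core baseline
    def rel_power(f, v):
        return v * v * f

    one_fast = rel_power(1.0, 1.0)        # 1.00x power, 1.0x clocks
    two_slow = 2 * rel_power(0.7, 0.8)    # ~0.90x power, 1.4x aggregate clocks
    print(one_fast, two_slow)

same ballpark of power, more total cycles - but only because the voltage
dropped along with the clock.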

> into the x86 market recently. The Intel offering is scheduled to be available
> early in 2006, while the beta versions of the AMD product have been shipping
> since January 2005.

hmm, Intel gives every sign of shipping dual-core for servers well before
2006 - of course, they're outrageously hot :(

> be throttled down when not in use. According to server vendors, the
> technology has the capability to save customers 24% annually in power costs.

kind of interesting that these authors assume servers don't have 24x7 load -
on a cluster node that's busy around the clock, throttling saves nothing.
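
a one-liner makes the dependence on duty cycle explicit (the idle fraction
and the throttled-power ratio below are invented numbers):

    # fraction of power saved by throttling idle periods
    def saving(idle_frac, throttled_ratio=0.4):
        return idle_frac * (1.0 - throttled_ratio)

    print(saving(0.4))   # idle 40% of the time -> 0.24, the vendors' 24%
    print(saving(0.0))   # 24x7 load (a cluster node) -> 0.0, nothing saved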

> Fewer parts, cooler server
> 
> Calibrated vectored cooling (CVC) is an example. CVC optimizes the path of
> cooled air flow through the system, allowing servers to use fewer fans and
> less power. It directly channels refrigerated air through the hottest parts
> of the server. IBM recently offered CVC for its xSeries and blades. CVC
> technology for blades has allowed IBM to launch the first Xeon-based blade
> product.

hmm, engineered cooling is great, but it doesn't decrease power dissipation -
it just moves the same watts out of the box more efficiently (fewer fans do
shave a little, but that's marginal).

> Egenera blades are all processor and memory. There are no disk drives,
> connector slots or NIC cards to block airflow. None of the extraneous
> hardware is included in the actual server. According to Egenera, eliminating
> as many components as possible allows direct flow to critical areas.

again, better airflow is a great thing, but doesn't reduce dissipation!

> But conventional under-floor air can effectively cool hardware only to a
> certain point.

but it's simply not inevitable that we must go past that point.  sure, it's 
neato to have ~200-400 cpus in a rack, but is your floorspace really that
expensive?
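
put a (made-up) dollar figure on it - suppose going dense frees up a few
racks' worth of floor:

    # floorspace saved by extreme density; all costs hypothetical
    sqft_per_rack = 20           # rack footprint plus aisle/clearance share
    dollars_per_sqft_yr = 30.0   # machine-room space, $/sqft/year
    racks_saved = 3              # dense blades vs spread-out 1U boxes
    print(racks_saved * sqft_per_rack * dollars_per_sqft_yr)   # -> $1800/yr

$1800/year doesn't buy much heroic cooling.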

> But either way, these approaches aren't mutually exclusive. And data center
> managers would be wise to employ every advantage possible to protect hardware
> from a meltdown. 

jeez.



