[Beowulf] /. Cooler room or cooler servers?

Eugen Leitl eugen at leitl.org
Thu Apr 7 02:42:26 PDT 2005


http://searchdatacenter.techtarget.com/originalContent/0,289142,sid80_gci1076392,00.html

(Some (not entirely idiotic) discussion on
<http://it.slashdot.org/it/05/04/06/1357227.shtml?tid=126&tid=218> )

Cooler rooms or cooler servers?
By Matt Stansberry, News Editor
06 Apr 2005 | SearchDataCenter.com

Data center managers are packing more computing power into smaller
footprints. But today's racks and blades produce massive amounts of heat per
square foot in comparison with those old tower servers. Left unchecked, this
heat can cause an IT meltdown.

So is the answer to the escalating heat problem in the data center better
designed servers or better designed rooms? Are IT shops favoring one approach
over the other? And as you plan your next server refresh, what should you be
looking for?

Time for hardware innovation

Advances in server technology may give data center managers a little room to
breathe. Innovations from chip technology to server construction are
improving hardware's capability to deal with high-density environments.

According to Gordon Haff, analyst with Nashua, N.H.-based Illuminata, the
days of using traditional server cooling methods are over. Companies going
forward with dense server environments will need to use every available
improvement to get closer to their goals. Upshot: A cooler server starts
with the processor.

Better chips

Multi-core processing is one technology with the potential to reduce data
center cooling requirements. In principle, multi-core processors could
operate at a lower frequency, using less power to achieve today's computing
levels, thereby running cooler.
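
The arithmetic behind that claim: dynamic CMOS power goes roughly as P = C x
V^2 x f, and supply voltage has historically tracked frequency, so cutting
clock speed pays off superlinearly. A minimal back-of-envelope sketch in
Python (the perfect-scaling and cubic voltage/frequency assumptions are
idealizations, not vendor data):

    # Back-of-envelope: dynamic CMOS power scales as P ~ C * V^2 * f.
    # Assuming (idealization) supply voltage scales linearly with
    # frequency, per-core power goes as f**3.

    def relative_power(freq_fraction, cores=1):
        """Power relative to one core running at full frequency."""
        return cores * freq_fraction ** 3

    print(relative_power(1.0, cores=1))   # baseline: 1.00
    # Two cores at half frequency match the baseline's aggregate
    # throughput (assuming perfect parallel scaling) at a quarter
    # of the power.
    print(relative_power(0.5, cores=2))   # 0.25

In practice voltage cannot drop in lockstep with frequency forever, and few
workloads parallelize perfectly, which is why the real savings fall short of
the ideal case.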

Dual-core technology has been around since the 1990s, but AMD Inc. of
Sunnyvale, Calif., and Intel Corp. of Santa Clara, Calif., have only
recently pushed it into the x86 market. The Intel offering is scheduled to
be available early in 2006, while beta versions of the AMD product have been
shipping since January 2005.

Haff predicts that further out, as multi-core processing matures, operating
at lower frequencies will become more practical, but current dual-core chips
will not inspire people to throttle back.

"We've learned to live with where we are regarding server heat issues," Haff
said. "People are going to run dual core processors at the peak power levels
to get increased performance, rather than running them at half power to get
the power of today's CPUs."

Another new chip technology designed to help servers run cool is demand-based
switching. Intel's 64-bit Xeon chips use this feature, which throttles the
processor down when the system is not fully loaded. According to server
vendors, the technology can save customers 24% annually in power costs.
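
To put the vendors' figure in perspective, a quick sketch (the server
wattage and electricity price below are illustrative assumptions, not
numbers from the article):

    # Rough annual power-cost sketch for demand-based switching.
    # Wattage and utility rate are illustrative assumptions.

    HOURS_PER_YEAR = 24 * 365
    draw_watts = 450         # assumed average draw of one 2-socket server
    price_per_kwh = 0.10     # assumed utility rate, $/kWh
    claimed_savings = 0.24   # the vendors' 24% figure

    annual_cost = draw_watts / 1000 * HOURS_PER_YEAR * price_per_kwh
    print(f"baseline: ${annual_cost:.0f}/yr per server")                # ~$394
    print(f"saved: ${annual_cost * claimed_savings:.0f}/yr per server") # ~$95

    # Multiply by hundreds of servers, plus the cooling power no longer
    # needed to remove that heat, and the savings compound.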

A step up from the chip level, server design also plays a role in heat
issues. IBM is a leader in cooling innovation, though many of its cooling
features aren't new, since they often trickle down from the mainframe.

Fewer parts, cooler server

Calibrated vectored cooling (CVC) is an example. CVC optimizes the path of
cooled air through the system, channeling refrigerated air directly through
the hottest parts of the server and allowing servers to use fewer fans and
less power. IBM recently brought CVC to its xSeries servers and blades, and
the technology allowed IBM to launch the first Xeon-based blade product.

Where blades are concerned, it's not just IBM stepping up to the plate with
cooling strategies. Almost every blade manufacturer has been forced to deal
with cooling issues, including Marlboro, Mass.-based Egenera.

Egenera was founded in 2000 by Vern Brownell, former chief technology officer
of Goldman Sachs. The company has focused solely on blade manufacturing and
has become a major player in that market in a very short time.

According to Susan Davis, Egenera vice president of marketing and product
management, data center managers don't always have the luxury of designing a
state-of-the-art server room. Since so many data centers are limited to
working with available resources, Davis feels better server design can have
a broader impact.

Egenera blades are all processor and memory: there are no disk drives,
connector slots or NICs to block airflow, and no extraneous hardware inside
the server itself. According to Egenera, eliminating as many components as
possible lets air flow directly over the critical areas.

When data center design is key

Despite vendors' efforts to help alleviate the data center heat wave, some
experts predict specialized cooling and room engineering will be necessary to
meet computing demand.

"We are very close to the limit of how much more energy-efficient we can make
CMOS [complementary metal-oxide semiconductor -- used in the transistors of
most microchips] technology, and there is nothing anyone sees on the horizon
in the way of a technical breakthrough to replace it with much more
energy-efficient devices," said Robert E. McFarlane, president of the
Interport Financial Division of New York-based Shen, Milsom & Wilke Inc.
"Therefore, as compute power goes up and is crammed into smaller and smaller
spaces, the stuff is simply going to get hotter."
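
The trend McFarlane describes is easy to quantify. A rough density
comparison (all figures below are illustrative assumptions about typical
hardware of the era, not measurements from the article):

    # Rough heat-density comparison: tower servers vs. a blade rack.
    # All figures are illustrative assumptions.

    tower_watts, tower_sqft = 300, 6      # one tower plus its floor space
    rack_watts, rack_sqft = 12_000, 10    # loaded 42U blade rack plus aisle

    print(f"towers: {tower_watts / tower_sqft:.0f} W/sq ft")   # ~50
    print(f"blades: {rack_watts / rack_sqft:.0f} W/sq ft")     # ~1200

    # More than an order of magnitude more heat in the same footprint:
    # the "crammed into smaller spaces" problem in numbers.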

Cool waters run deep

So what is the engineering solution people are talking about? According to
Charles King, principal analyst with Hayward, Calif.-based Pund-IT Research,
it's nothing new. In fact, the idea was cutting edge thirty years ago.

Some experts have proposed moving toward liquid cooling, much like the old
Cray systems -- the liquid-cooled supercomputers of the 1970s.

"Water is going to come back into the data center. The only question is when,
and for what purpose," said McFarlane said. "Roger Schmidt, chief
thermodynamics engineer at IBM, [recently] admitted that, while everyone
knows servers are one day going to be water-cooled, no one wants to be first,
believing that if their competitors still claim they are fine with air
cooling, the guy who goes to water cooling will rapidly drop back in sales
until others admit it is necessary."

King agrees. "Vendors are going to do everything they can to avoid going to
liquid-cooled systems. It makes everything more complex."

Cold air, warm parts

But conventional under-floor air can effectively cool hardware only to a
certain point.

"Even if data center managers could get enough cold air under the floor to
equipment locations, getting it evenly up the heights of cabinets is another
problem," said McFarlane said. "And dealing with the problem of keeping
return air from mixing with the cold air is even more difficult, both to
predict and to accomplish."
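
A quick calculation shows why under-floor air runs out of headroom. This
sketch uses the common sea-level approximation CFM = 3.16 x watts / delta-T
(deg F); the rack wattage and per-tile airflow are illustrative assumptions,
not figures from the article:

    # Airflow needed to carry away rack heat, via the common sea-level
    # approximation: CFM ~= 3.16 * watts / delta_T_F.
    # Rack wattage and per-tile airflow are illustrative assumptions.

    def cfm_required(watts, delta_t_f=20.0):
        """Cubic feet per minute of air to absorb `watts` of heat with
        a temperature rise of `delta_t_f` degrees Fahrenheit."""
        return 3.16 * watts / delta_t_f

    rack_watts = 12_000   # assumed fully loaded blade rack
    tile_cfm = 400        # assumed delivery of one perforated floor tile

    needed = cfm_required(rack_watts)
    print(f"{needed:.0f} CFM, ~{needed / tile_cfm:.1f} tiles per rack")
    # ~1896 CFM -- nearly five perforated tiles' worth of cold air for a
    # single rack, before any mixing or bypass losses.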

Therefore, while it is important to design new data centers with every
possible technique for maximizing conventional performance, McFarlane
predicts much of what is coming or already here is going to require localized
cooling techniques.

These options include specialized enclosures, such as those West Kingston,
R.I.-based APC promotes with its InfraStruXure line. These systems integrate
power, cooling and environmental management within a rack. According to APC,
this can even eliminate the need for raised floors in many applications.

Other options include overhead spot cooling or liquid-cooled cabinets such
as those offered by Columbus, Ohio-based Liebert Corp. The Liebert X-treme
Density heat removal system, for example, uses overhead fans and a waterless
refrigerant pump to maintain safe rack temperatures.

In the battle against IT's warming trend, the value and limitations of each
vendor's approach need to be weighed. According to King, IT professionals
can be much more proactive in data center design than they can be on the
manufacturing end.

Other experts may disagree, arguing that cooler servers are vital if
customers are to keep buying new generations of servers and packing them
into ever-shrinking confines.

But either way, these approaches aren't mutually exclusive. And data center
managers would be wise to employ every advantage possible to protect hardware
from a meltdown. 

-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a>
______________________________________________________________
ICBM: 48.07078, 11.61144            http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
http://moleculardevices.org         http://nanomachines.net