HP creates off-the-shelf supercomputer
Eugene Leitl
Eugene.Leitl at lrz.uni-muenchen.de
Fri Oct 5 03:33:09 PDT 2001
HP creates off-the-shelf supercomputer
By Matthew Broersma
October 4, 2001, 10:50 a.m. PT
http://news.cnet.com/news/0-1003-200-7409795.html?tag=mn_hd
How to build your own supercomputer: Take a few off-the-shelf,
stripped-down PCs, add some network switches, a maze of Ethernet cabling
and some homegrown Linux software, and you'll be well on your way.
Hewlett-Packard, together with a national laboratory in France, tried this
recipe out and, to the great surprise of many scientists, it worked. What
they ended up with is the "I-Cluster," a Mandrake Linux-powered cluster of
225 PCs that has benchmarked its way into the list of the top 500 most
powerful computers in the world.
At a technical session last summer, scientists from HP's labs in Grenoble,
France, started talking to experts at the local INRIA Rhône-Alpes (France's
National Institute for Research in Computer Science) about the possibility
of doing "something a little unusual." The idea was to build a
supercomputer out of standard hardware components like those that might be
found in the typical big business.
They started with 100 of Hewlett-Packard's e-PCs--simplified PCs with
reduced expandability--and gradually worked up to the present configuration
of 225 nodes, which is near the cluster's physical limit.
HP and INRIA showed the system to journalists for the first time
Wednesday.
The version of the e-PC used for I-Cluster is sealed, meaning no hardware
tweaks can be made, and the experiment uses standard networking equipment.
This means that, unlike other clustered supercomputing projects, a
business could use the I-Cluster method to draw on idle computing power
from around the company network to carry out computing-intensive tasks.
"These are really standard machines; we didn't even open the box," said
Bruno Richard, program manager with HP Labs Grenoble.
Other clusters, like ASCI Red at Sandia National Laboratories in New
Mexico, are made up of heavily modified parts.
The hard part
There were formidable obstacles to getting the cluster to run as if it
were a single machine, Richard said, such as distributing functions like
storage and network caching across general-purpose devices and managing
and programming the cluster as a whole.
"Our previous cluster was 12 machines," he said. "When you have 200, you
have to rethink everything."
For example, making even simple software changes became a difficult task
with so many machines to be altered. In the end, however, the technicians
devised tools capable of reinstalling every machine in the cluster from
scratch in about 12 minutes, according to Richard.
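The article does not say how those tools work, but a minimal sketch of the
general idea, fanning a reinstall command out to every node in parallel
instead of visiting them one by one, might look like the Python below. The
node naming scheme, the SSH transport and the reimage.sh script are
assumptions for illustration, not details of HP's actual tooling.

import subprocess
from concurrent.futures import ThreadPoolExecutor

# Assumed node naming scheme: node001 .. node225 (illustrative only).
NODES = ["node%03d" % n for n in range(1, 226)]

# Assumed remote command that wipes the node and reinstalls the OS image.
REINSTALL_CMD = "/usr/local/sbin/reimage.sh"

def reinstall(host):
    """Run the reinstall command on one node and report its exit status."""
    result = subprocess.run(["ssh", "-o", "BatchMode=yes", host, REINSTALL_CMD])
    return host, result.returncode

if __name__ == "__main__":
    # Fanning out to all nodes at once, rather than looping over them,
    # is what makes a full reinstall in minutes rather than hours plausible.
    with ThreadPoolExecutor(max_workers=32) as pool:
        for host, status in pool.map(reinstall, NODES):
            print(host, "ok" if status == 0 else "FAILED")

The same fan-out pattern would also cover the day-to-day case described
above, pushing a small software change to every machine at once.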
The researchers plan to release the tools they developed as open-source
software for anyone who might want to build a supercomputer themselves.
The whole project, minus network cabling, cost about $210,000.
The individual machines that make up the I-Cluster are already out of date,
each running a 733MHz Pentium III processor with 256MB of RAM and a 15GB
hard drive. HP introduced a faster version at the beginning of this month
and will launch a Pentium 4 e-PC by the end of the year.
e-PC features like super-quiet cooling and low power consumption,
originally designed for the corporate buyer, proved useful in the
supercomputing environment too--the cluster runs surprisingly quietly and
doesn't require anything more than standard air conditioning to keep it
cool.
As measured by standard benchmarks, I-Cluster ranks 385th worldwide among
supercomputers. Richard said the experiment showed that there is a linear
relationship between the number of nodes and performance, meaning that
it's relatively simple to add or remove computing power depending on the
task.
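As a rough illustration of what a linear node-performance relationship
means for sizing a job (the figures below are invented for the example,
not measurements from the I-Cluster), the arithmetic reduces to a one-line
estimate:

def nodes_needed(total_work_units, per_node_rate, deadline_hours):
    """Estimate node count for an embarrassingly parallel job, assuming
    perfectly linear scaling and no communication overhead."""
    work_per_node = per_node_rate * deadline_hours
    return -(-total_work_units // work_per_node)  # ceiling division

# Example with invented numbers: 90,000 independent work units, 50 units
# per hour per node, an 8-hour deadline -> 225 nodes.
print(nodes_needed(90000, 50, 8))

Doubling the deadline or the per-node rate halves the node count, which is
what makes it straightforward to add or remove machines to fit the task.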
About 60 research teams worldwide are working on the system, with half
running typical supercomputing tasks and the other half exploring how
I-Cluster works.
Serious computing
The project shows that standard computing power--like the unused
processing power on an office network--can be harnessed for serious
computing work. In the business world, CAD designers and chemists are
among those who need intensive computing power, Richard said.
"You could gather the latent power from office PCs using this technique,"
he said. "We eventually want to scale it higher, to thousands of PCs."
Currently the hard limit for such a cluster is about 256 nodes, because of
switching capacity, but that could be surpassed by linking several
clusters that are physically near each other.
A more daunting task might be taking the model to a consumer environment,
which, Richard pointed out, is full of often dormant processors like those
in printers and DVD players.
HP imagines "clouds" of devices, or "virtual entities," which could
discover and use the resources around a user. Richard said that
supercomputing power could come in handy for certain tasks, like
converting large video files from one format to another, that currently
take a good amount of patience.
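The article does not spell out how such a conversion would be spread
across nearby devices, but one plausible sketch is a shared work queue:
cut the video into independent segments and let each idle device pull the
next segment as soon as it finishes the last one. The device names,
segment files and the do_convert placeholder below are all assumptions,
not anything HP described.

import queue
import threading

IDLE_DEVICES = ["office-pc", "laptop", "set-top-box"]      # assumed devices
SEGMENTS = ["movie.part%02d" % i for i in range(12)]       # assumed segments

work = queue.Queue()
for seg in SEGMENTS:
    work.put(seg)

def do_convert(device, segment):
    # Placeholder: a real system would ship the segment to the device,
    # convert it there and copy the result back.
    print("%s converted %s" % (device, segment))

def worker(device):
    # Each device takes the next segment only when it is free, so faster
    # devices naturally absorb more of the job.
    while True:
        try:
            segment = work.get_nowait()
        except queue.Empty:
            return
        do_convert(device, segment)

threads = [threading.Thread(target=worker, args=(d,)) for d in IDLE_DEVICES]
for t in threads:
    t.start()
for t in threads:
    t.join()

That pull-based design matters here because the devices Richard describes,
office PCs, printers and DVD players, are nowhere near uniform in speed.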
Other scientists predict that the practical problems of such a home
network will prove difficult to solve. Brigitte Plateau, head of INRIA's
Apache parallel computing project, said consumer need for such power
probably wouldn't make it worth the effort that such a system would
require.
"It is more likely that you would see an external service," she said.
HP's Richard said the use of Linux--version 7 of Mandrake's distribution,
in this case--was important because low-level changes could be made easily
to the software, and then the alterations could be shared freely with
other scientists, something that would have required a special agreement
with Microsoft if Windows had been used.
Plateau, whose Apache project encompasses I-Cluster, said the lab is also
working with Microsoft to port parallel computing applications to Windows.
"We had to face heterogeneity by spreading it over Linux and Windows too,"
she said. "It's not scientific, but technically it's good experience."