<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN">
<HTML>
<HEAD>
<META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1">
<META NAME="Generator" CONTENT="MS Exchange Server version 6.5.7653.38">
<TITLE>RS: [Beowulf] Inside Tsubame - the Nvidia GPU supercomputer</TITLE>
</HEAD>
<BODY>
<!-- Converted from text/plain format -->
<BR>
<BR>
<P><FONT SIZE=2>Very interesting, but perhaps a bit of overkill. How many TFlops per watt does that work out to? :-(<BR>
<BR>
Cheers,<BR>
-Alan<BR>
<BR>
<BR>
-----Original Message-----<BR>
From: beowulf-bounces@beowulf.org on behalf of Eugen Leitl<BR>
Sent: Fri 12/12/2008 08:56<BR>
To: info@postbiota.org; Beowulf@beowulf.org<BR>
Subject: [Beowulf] Inside Tsubame - the Nvidia GPU supercomputer<BR>
<BR>
<BR>
<A HREF="http://www.goodgearguide.com.au/article/270416/inside_tsubame_-_nvidia_gpu_supercomputer?fp=&fpid=&pf=1">http://www.goodgearguide.com.au/article/270416/inside_tsubame_-_nvidia_gpu_supercomputer?fp=&fpid=&pf=1</A><BR>
<BR>
Inside Tsubame - the Nvidia GPU supercomputer<BR>
<BR>
Tokyo Tech University's Tsubame supercomputer attained 29th place in the<BR>
new Top 500, thanks in part to hundreds of Nvidia Tesla graphics cards.<BR>
<BR>
Martyn Williams (IDG News Service) 10/12/2008 12:20:00<BR>
<BR>
When you enter the computer room on the second floor of Tokyo Institute of<BR>
Technology's computer building, you're not immediately struck by the size of<BR>
Japan's second-fastest supercomputer. You can't see the Tsubame computer for<BR>
the industrial air conditioning units that are standing in your way, but this<BR>
in itself is telling. With more than 30,000 processing cores buzzing away,<BR>
the machine consumes a megawatt of power and needs to be kept cool.<BR>
<BR>
Tsubame was ranked the 29th-fastest supercomputer in the world in the latest<BR>
Top 500 ranking, with a speed of 77.48 TFlops (trillion floating-point<BR>
operations per second) on the industry-standard Linpack benchmark.<BR>
<BR>
While its position is relatively good, that's not what makes it so special.<BR>
The interesting thing about Tsubame is that it doesn't rely on the raw<BR>
processing power of CPUs (central processing units) alone to get its work<BR>
done. Tsubame includes hundreds of graphics processors of the same type used<BR>
in consumer PCs, working alongside CPUs in a mixed environment that some say<BR>
is a model for future supercomputers serving disciplines like material<BR>
chemistry.<BR>
<BR>
Graphics processors (GPUs) are very good at quickly performing the same<BR>
computation on large amounts of data, so they can make short work of some<BR>
problems in areas such as molecular dynamics, physics simulations and image<BR>
processing.<BR>
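As an illustration of the data-parallel pattern described above (a sketch, not Tsubame's actual code), the same operation can be applied independently to every element of a large data set; on a GPU, each element could be handled by a separate core:

```python
# A large data set (hypothetical values, for illustration only).
data = [i / 1_000_000 for i in range(1_000_000)]

# The same computation applied independently to every element -- the
# data-parallel pattern GPUs excel at, since no element depends on its
# neighbours and all can be processed simultaneously.
result = [x * 0.5 + 0.25 for x in data]

print(len(result))   # 1000000
print(result[0])     # 0.25
```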
<BR>
"I think in the vast majority of the interesting problems in the future, the<BR>
problems that affect humanity where the impact comes from nature ... requires<BR>
the ability to manipulate and compute on a very large data set," said<BR>
Jen-Hsun Huang, CEO of Nvidia, who spoke at the university this week. Tsubame<BR>
uses 680 of Nvidia's Tesla graphics cards.<BR>
<BR>
Just how much of a difference do the GPUs make? Takayuki Aoki, a professor of<BR>
material chemistry at the university, said that simulations that used to take<BR>
three months now take 10 hours on Tsubame.<BR>
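A rough back-of-the-envelope check on that speedup, assuming "three months" means roughly 90 days of wall-clock time (an assumption; the article does not say):

```python
# Hypothetical figures: "three months" taken as ~90 days of wall-clock time.
before_hours = 90 * 24   # ~2,160 hours on CPUs alone
after_hours = 10         # reported runtime on Tsubame with GPUs

speedup = before_hours / after_hours
print(f"~{speedup:.0f}x speedup")  # roughly 216x
```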
<BR>
Tsubame itself - once you move past the air conditioners - is split across<BR>
several rooms on two floors of the building and is largely made up of<BR>
rack-mounted Sun x4600 systems. There are 655 of these in all, each with 16<BR>
AMD Opteron CPU cores and ClearSpeed CSX600 accelerator boards.<BR>
<BR>
The graphics chips are contained in 170 Nvidia Tesla S1070 rack-mount units<BR>
that have been slotted in between the Sun systems. Each of the 1U Nvidia<BR>
systems has four GPUs inside, each of which has 240 processing cores for a<BR>
total of 960 cores per system.<BR>
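The totals implied by those figures work out as follows (simple arithmetic on the numbers in the article):

```python
tesla_units = 170      # Nvidia Tesla S1070 1U rack-mount units
gpus_per_unit = 4      # GPUs per S1070
cores_per_gpu = 240    # processing cores per GPU

cores_per_unit = gpus_per_unit * cores_per_gpu
total_gpus = tesla_units * gpus_per_unit
total_gpu_cores = tesla_units * cores_per_unit

print(cores_per_unit)   # 960 cores per 1U system, as the article states
print(total_gpus)       # 680 Tesla GPUs, matching the figure quoted earlier
print(total_gpu_cores)  # 163200 GPU processing cores in all
```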
<BR>
The Tesla systems were added to Tsubame over the course of about a week while<BR>
the computer was operating.<BR>
<BR>
"People thought we were crazy," said Satoshi Matsuoka, director of the Global<BR>
Scientific Information and Computing Center at the university. "This is a ¥1<BR>
billion (US$11 million) supercomputer consuming a megawatt of power, but we<BR>
proved technically that it was possible."<BR>
<BR>
The result is what university staff call version 1.2 of the Tsubame<BR>
supercomputer.<BR>
<BR>
"I think we should have been able to achieve 85 [T Flops], but we ran out of<BR>
time so it was 77 [T Flops]," said Matsuoka of the benchmarks performed on<BR>
the system. At 85T Flops it would have risen a couple of places in the Top<BR>
500 and been ranked fastest in Japan.<BR>
<BR>
There's always next time: A new Top 500 list is due out in June 2009, and<BR>
Tokyo Institute of Technology is also looking further ahead.<BR>
<BR>
"This is not the end of Tsubame, it's just the beginning of GPU acceleration<BR>
becoming mainstream," said Matsuoka. "We believe that in the world there will<BR>
be supercomputers registering several petaflops in the years to come, and we<BR>
would like to follow suit."<BR>
<BR>
Tsubame 2.0, as he dubbed the next upgrade, should arrive within the next<BR>
two years and will boast a sustained performance of at least a petaflop<BR>
(1,000 teraflops), he said. The basic design for the machine is<BR>
still not finalized but it will continue the heterogeneous computing base of<BR>
mixing CPUs and GPUs, he said.<BR>
_______________________________________________<BR>
Beowulf mailing list, Beowulf@beowulf.org<BR>
To change your subscription (digest mode or unsubscribe) visit <A HREF="http://www.beowulf.org/mailman/listinfo/beowulf">http://www.beowulf.org/mailman/listinfo/beowulf</A><BR>
<BR>
</FONT>
</P>
</BODY>
</HTML>