23.55 Mflops/W according to Green500 estimates (#488 in their list).

2008/12/12 Vincent Diepeveen <diep@xs4all.nl>:
> On Dec 12, 2008, at 8:56 AM, Eugen Leitl wrote:
>
>> http://www.goodgearguide.com.au/article/270416/inside_tsubame_-_nvidia_gpu_supercomputer?fp=&fpid=&pf=1
>>
>> Inside Tsubame - the Nvidia GPU supercomputer
>>
>> Tokyo Tech University's Tsubame supercomputer attained 29th ranking in the
>> new Top 500, thanks in part to hundreds of Nvidia Tesla graphics cards.
>>
>> Martyn Williams (IDG News Service) 10/12/2008 12:20:00
>>
>> When you enter the computer room on the second floor of Tokyo Institute of
>> Technology's computer building, you're not immediately struck by the size of
>> Japan's second-fastest supercomputer. You can't see the Tsubame computer for
>> the industrial air conditioning units that are standing in your way, but this
>> in itself is telling. With more than 30,000 processing cores buzzing away,
>> the machine consumes a megawatt of power and needs to be kept cool.
>
> 1,000,000 watts / 77,480 Gflops = 12.9 watts per Gflop.
>
> If you run double precision codes on this box, it is a big energy waster IMHO
> (of course, it's very well equipped for all kinds of crypto codes using that Google library).
>
> Vincent
>
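For reference, a quick back-of-the-envelope check of the two efficiency figures in this thread: it just re-derives the watts-per-Gflop number from the 1 MW and 77.48 TFlops quoted in the article, and inverts it for comparison with the Green500 Mflops/W estimate (plain C++, nothing Tsubame-specific; the closing comment is a guess, not a sourced fact).

// Sanity check of the power-efficiency figures quoted in this thread.
#include <cstdio>

int main() {
    const double watts  = 1.0e6;    // "consumes a megawatt of power" (article)
    const double gflops = 77480.0;  // 77.48 TFlops Linpack (article)

    const double watts_per_gflop = watts / gflops;              // Vincent's figure
    const double mflops_per_watt = (gflops * 1000.0) / watts;   // same numbers, inverted

    printf("%.1f watts per Gflop\n", watts_per_gflop);  // ~12.9, as above
    printf("%.1f Mflops/W\n", mflops_per_watt);         // ~77.5 from these numbers

    // The Green500 estimate at the top of the thread (23.55 Mflops/W) is roughly a
    // third of that, which suggests it rests on a considerably higher power figure
    // than the article's round "a megawatt".
    return 0;
}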
>> Tsubame was ranked 29th-fastest supercomputer in the world in the latest Top
>> 500 ranking with a speed of 77.48 TFlops (floating point operations per
>> second) on the industry-standard Linpack benchmark.
>>
>> While its position is relatively good, that's not what makes it so special.
>> The interesting thing about Tsubame is that it doesn't rely on the raw
>> processing power of CPUs (central processing units) alone to get its work
>> done. Tsubame includes hundreds of graphics processors of the same type used
>> in consumer PCs, working alongside CPUs in a mixed environment that some say
>> is a model for future supercomputers serving disciplines like material
>> chemistry.
>>
>> Graphics processors (GPUs) are very good at quickly performing the same
>> computation on large amounts of data, so they can make short work of some
>> problems in areas such as molecular dynamics, physics simulations and image
>> processing.
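To make the "same computation on large amounts of data" point concrete, here is a minimal CUDA sketch of that pattern. It is a hypothetical SAXPY example, not code from Tsubame or from any of the applications mentioned in the article; the idea is simply that every GPU thread applies one identical operation to one element of a large array.

// Hypothetical data-parallel sketch: one GPU thread per array element.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element
    if (i < n)
        y[i] = a * x[i] + y[i];                     // same operation, many data
}

int main() {
    const int n = 1 << 20;                          // ~1 million elements
    const size_t bytes = n * sizeof(float);
    float *hx = new float[n], *hy = new float[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 0.5f, dx, dy);  // thousands of threads in flight
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f\n", hy[0]);                   // expect 2.5
    cudaFree(dx); cudaFree(dy);
    delete[] hx; delete[] hy;
    return 0;
}

Molecular dynamics force loops, stencil-based physics solvers and per-pixel image filters all map onto this same structure, which is why they are singled out here.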
>>
>> "I think in the vast majority of the interesting problems in the future, the
>> problems that affect humanity where the impact comes from nature ... requires
>> the ability to manipulate and compute on a very large data set," said
>> Jen-Hsun Huang, CEO of Nvidia, who spoke at the university this week. Tsubame
>> uses 680 of Nvidia's Tesla graphics cards.
>>
>> Just how much of a difference do the GPUs make? Takayuki Aoki, a professor of
>> material chemistry at the university, said that simulations that used to take
>> three months now take 10 hours on Tsubame.
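As a rough reading of that claim (not a figure from the article), taking "three months" as about 90 days of wall-clock time gives a speedup on the order of 200x:

// Rough scale of the quoted speedup, assuming "three months" ~ 90 days.
#include <cstdio>

int main() {
    const double before_hours = 90.0 * 24.0;  // ~2,160 hours
    const double after_hours  = 10.0;         // "now take 10 hours"
    printf("speedup ~ %.0fx\n", before_hours / after_hours);  // ~216x
    return 0;
}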
>>
>> Tsubame itself - once you move past the air-conditioners - is split across
>> several rooms in two floors of the building and is largely made up of
>> rack-mounted Sun x4600 systems. There are 655 of these in all, each of which
>> has 16 AMD Opteron CPU cores inside it, and ClearSpeed CSX600 accelerator
>> boards.
>>
>> The graphics chips are contained in 170 Nvidia Tesla S1070 rack-mount units
>> that have been slotted in between the Sun systems. Each of the 1U Nvidia
>> systems has four GPUs inside, each of which has 240 processing cores for a
>> total of 960 cores per system.
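Tallying up the component counts from the two paragraphs above (an illustrative count only; it ignores the ClearSpeed boards, for which the article gives no core count, so it will not reproduce the "more than 30,000 processing cores" figure from the opening paragraph):

// Core counts implied by the figures in the article.
#include <cstdio>

int main() {
    const int sun_nodes     = 655;  // Sun x4600 systems
    const int opteron_cores = 16;   // AMD Opteron cores per x4600
    const int tesla_units   = 170;  // Nvidia Tesla S1070 1U units
    const int gpus_per_unit = 4;    // GPUs per S1070
    const int cores_per_gpu = 240;  // processing cores per GPU

    printf("Opteron cores: %d\n", sun_nodes * opteron_cores);                    // 10,480
    printf("Tesla GPUs:    %d\n", tesla_units * gpus_per_unit);                  // 680, matching the figure above
    printf("GPU cores:     %d\n", tesla_units * gpus_per_unit * cores_per_gpu);  // 163,200
    return 0;
}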
>>
>> The Tesla systems were added to Tsubame over the course of about a week while
>> the computer was operating.
>>
>> "People thought we were crazy," said Satoshi Matsuoka, director of the Global
>> Scientific Information and Computing Center at the university. "This is a ¥1
>> billion (US$11 million) supercomputer consuming a megawatt of power, but we
>> proved technically that it was possible."
>>
>> The result is what university staff call version 1.2 of the Tsubame
>> supercomputer.
>>
>> "I think we should have been able to achieve 85 [TFlops], but we ran out of
>> time so it was 77 [TFlops]," said Matsuoka of the benchmarks performed on
>> the system. At 85 TFlops it would have risen a couple of places in the Top
>> 500 and been ranked fastest in Japan.
>>
>> There's always next time: A new Top 500 list is due out in June 2009, and
>> Tokyo Institute of Technology is also looking further ahead.
>>
>> "This is not the end of Tsubame, it's just the beginning of GPU acceleration
>> becoming mainstream," said Matsuoka. "We believe that in the world there will
>> be supercomputers registering several petaflops in the years to come, and we
>> would like to follow suit."
>>
>> Tsubame 2.0, as he dubbed the next upgrade, should be here within the next
>> two years and will boast a sustained performance of at least a petaflop (a
>> petaflop is 1,000 teraflops), he said. The basic design for the machine is
>> still not finalized but it will continue the heterogeneous computing base of
>> mixing CPUs and GPUs, he said.
>
> _______________________________________________
> Beowulf mailing list, Beowulf@beowulf.org
> To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf