[Beowulf] Inside Tsubame - the Nvidia GPU supercomputer

Igor Kozin i.n.kozin at googlemail.com
Fri Dec 12 11:58:58 PST 2008


23.55 MFlops/W according to the Green500 estimates (#488 on their list).
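
For reference, here is the back-of-the-envelope conversion between the two
efficiency units used in this thread (a minimal Python sketch; the inputs
are simply the Linpack and power figures quoted below, not measured values):

    # Back-of-the-envelope efficiency conversion from the figures
    # quoted in this thread: Linpack Rmax and total power draw.
    rmax_gflops = 77480.0   # Tsubame's Linpack Rmax, in GFlops
    power_watts = 1.0e6     # quoted power consumption, ~1 MW

    mflops_per_watt = rmax_gflops * 1e3 / power_watts   # ~77.5 MFlops/W
    watts_per_gflop = power_watts / rmax_gflops         # ~12.9 W/GFlop

    print("%.2f MFlops/W, %.1f W/GFlop" % (mflops_per_watt, watts_per_gflop))

The ~12.9 W/GFlop matches Vincent's figure below; the much lower Green500
number presumably comes from a different power or performance measurement.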

2008/12/12 Vincent Diepeveen <diep at xs4all.nl>

>
> On Dec 12, 2008, at 8:56 AM, Eugen Leitl wrote:
>
>
>> http://www.goodgearguide.com.au/article/270416/inside_tsubame_-_nvidia_gpu_supercomputer?fp=&fpid=&pf=1
>>
>> Inside Tsubame - the Nvidia GPU supercomputer
>>
>> Tokyo Tech University's Tsubame supercomputer attained 29th ranking in the
>> new Top 500, thanks in part to hundreds of Nvidia Tesla graphics cards.
>>
>> Martyn Williams (IDG News Service) 10/12/2008 12:20:00
>>
>> When you enter the computer room on the second floor of Tokyo Institute of
>> Technology's computer building, you're not immediately struck by the size of
>> Japan's second-fastest supercomputer. You can't see the Tsubame computer for
>> the industrial air conditioning units that are standing in your way, but
>> this in itself is telling. With more than 30,000 processing cores buzzing
>> away, the machine consumes a megawatt of power and needs to be kept cool.
>>
>>
> 1,000,000 watts / 77,480 GFlops = 12.9 watts per GFlop.
>
> If you run double precision codes on this box, it is a big energy waster,
> IMHO.
> (Of course, it's very well equipped for all kinds of crypto codes using
> that Google library.)
>
> Vincent
>
>
>> Tsubame was ranked the 29th-fastest supercomputer in the world in the
>> latest Top 500 ranking with a speed of 77.48 TFlops (floating point
>> operations per second) on the industry-standard Linpack benchmark.
>>
>> While its position is relatively good, that's not what makes it so special.
>> The interesting thing about Tsubame is that it doesn't rely on the raw
>> processing power of CPUs (central processing units) alone to get its work
>> done. Tsubame includes hundreds of graphics processors of the same type
>> used in consumer PCs, working alongside CPUs in a mixed environment that
>> some say is a model for future supercomputers serving disciplines like
>> material chemistry.
>>
>> Graphics processors (GPUs) are very good at quickly performing the same
>> computation on large amounts of data, so they can make short work of some
>> problems in areas such as molecular dynamics, physics simulations and
>> image processing.
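
(This is classic data parallelism: one operation applied independently to
every element of a large array. A minimal sketch of the pattern in Python
with NumPy, where the vectorized form stands in for a GPU kernel running
across hundreds of cores; the particle data here is made up purely for
illustration:)

    # Data-parallel pattern: the same computation applied independently
    # to every element of a large data set, which is what GPUs excel at.
    import numpy as np

    positions = np.random.rand(1_000_000, 3)  # hypothetical particle positions

    # Serial version (one element at a time):
    #   distances = [np.sqrt(p @ p) for p in positions]
    # Data-parallel version (one operation over the whole array at once):
    distances = np.sqrt((positions ** 2).sum(axis=1))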
>>
>> "I think in the vast majority of the interesting problems in the future,
>> the
>> problems that affect humanity where the impact comes from nature ...
>> requires
>> the ability to manipulate and compute on a very large data set," said
>> Jen-Hsun Huang, CEO of Nvidia, who spoke at the university this week.
>> Tsubame
>> uses 680 of Nvidia's Tesla graphics cards.
>>
>> Just how much of a difference do the GPUs make? Takayuki Aoki, a professor
>> of material chemistry at the university, said that simulations that used
>> to take three months now take 10 hours on Tsubame.
>>
>> Tsubame itself - once you move past the air-conditioners - is split across
>> several rooms on two floors of the building and is largely made up of
>> rack-mounted Sun x4600 systems. There are 655 of these in all, each of
>> which has 16 AMD Opteron CPU cores inside it, plus Clearspeed CSX600
>> accelerator boards.
>>
>> The graphics chips are contained in 170 Nvidia Tesla S1070 rack-mount
>> units that have been slotted in between the Sun systems. Each of the 1U
>> Nvidia systems has four GPUs inside, each of which has 240 processing
>> cores, for a total of 960 cores per system.
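
(Tallying the parts as described: 170 units x 4 GPUs reproduces the 680
Tesla cards mentioned earlier. A quick Python sketch of the arithmetic,
using only the counts given in the article:)

    # Component tally from the counts given in the article.
    sun_nodes = 655          # Sun x4600 systems
    cores_per_node = 16      # AMD Opteron cores per node
    tesla_units = 170        # Tesla S1070 1U units
    gpus_per_unit = 4        # GPUs per S1070
    cores_per_gpu = 240      # processing cores per GPU

    cpu_cores = sun_nodes * cores_per_node   # 10,480 Opteron cores
    gpus = tesla_units * gpus_per_unit       # 680 GPUs, matching the article
    gpu_cores = gpus * cores_per_gpu         # 163,200 GPU cores

    print(cpu_cores, gpus, gpu_cores)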
>>
>> The Tesla systems were added to Tsubame over the course of about a week
>> while the computer was operating.
>>
>> "People thought we were crazy," said Satoshi Matsuoka, director of the
>> Global
>> Scientific Information and Computing Center at the university. "This is a
>> ¥1
>> billion (US$11 million) supercomputer consuming a megawatt of power, but
>> we
>> proved technically that it was possible."
>>
>> The result is what university staff call version 1.2 of the Tsubame
>> supercomputer.
>>
>> "I think we should have been able to achieve 85 [T Flops], but we ran out
>> of
>> time so it was 77 [T Flops]," said Matsuoka of the benchmarks performed on
>> the system. At 85T Flops it would have risen a couple of places in the Top
>> 500 and been ranked fastest in Japan.
>>
>> There's always next time: A new Top 500 list is due out in June 2009, and
>> Tokyo Institute of Technology is also looking further ahead.
>>
>> "This is not the end of Tsubame, it's just the beginning of GPU
>> acceleration
>> becoming mainstream," said Matsuoka. "We believe that in the world there
>> will
>> be supercomputers registering several petaflops in the years to come, and
>> we
>> would like to follow suit."
>>
>> Tsubame 2.0, as he dubbed the next upgrade, should be here within the next
>> two years and will boast a sustained performance of at least a petaflop (a
>> petaflop is 1,000 teraflops), he said. The basic design for the machine is
>> still not finalized but it will continue the heterogeneous computing base
>> of mixing CPUs and GPUs, he said.
>> _______________________________________________
>> Beowulf mailing list, Beowulf at beowulf.org
>> To change your subscription (digest mode or unsubscribe) visit
>> http://www.beowulf.org/mailman/listinfo/beowulf
>>

