[Beowulf] DARPA issues 20 MUSD grant to nVidia to go from 1 GFLOPS/Watt to 75 GFLOPS/Watt

Vincent Diepeveen diep at xs4all.nl
Mon Dec 17 11:34:39 PST 2012


On Dec 17, 2012, at 8:15 PM, Lux, Jim (337C) wrote:

> I wasn't thinking so much about code efficiency, more "wall plug  
> power" efficiency.  The board may consume 250W, but it will take  
> non-zero power to support that board, and then the power supply  
> efficiency needs to be taken into account.  But I suspect the 1  
> GFLOP/W was more just an "old" "rounded off" number.

Considering how the article was written, I doubt all of that is part of  
the calculation. Nvidia delivers a card that draws X watts, and that  
determines its efficiency.

If DARPA believes 1 GFLOPS/Watt is today's standard:
note that they explicitly mention 1 GFLOPS/Watt at 28 nm, while at 28 nm it's
actually about 6 GFLOPS/Watt, as the K20X numbers show.
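As a quick sanity check, the 28 nm figure follows directly from the K20X numbers quoted further down in this thread (roughly 1.4 TFLOPS peak and a 235 W TDP; the variable names here are just for illustration):

```python
# Sanity check on the 28 nm efficiency claim, using the K20X figures
# quoted later in this thread: ~1.4 TFLOPS peak and a 235 W TDP.
peak_tflops = 1.4   # advertised peak, TFLOPS
tdp_watts = 235     # board TDP, W

gflops_per_watt = peak_tflops * 1000 / tdp_watts
print(f"{gflops_per_watt:.1f} GFLOPS/Watt")  # ~6, not 1
```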

Speaking of power supply efficiencies - they have improved - at those  
loads they can easily reach up to 94%.
Usually a $20 rackmount PSU gets put in, though.

Check out the latest Corsair PSUs, based upon the new Seasonic  
Platinum design.

Most of the PSUs used in rackmounts nowadays do reach around 90%  
at 50% load, though.
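Jim's wall-plug point can be put in rough numbers. A minimal sketch, assuming the 6 GFLOPS/Watt board figure and the 90%/94% PSU efficiencies discussed in this thread, and ignoring any other overhead between the wall and the board:

```python
# Rough wall-plug efficiency: the board's GFLOPS/Watt scaled down
# by the PSU's conversion efficiency (other overheads ignored).
board_gflops_per_watt = 6.0       # K20X-style board figure from this thread

for psu_eff in (0.90, 0.94):      # typical rackmount PSU vs. Platinum-class
    at_the_wall = board_gflops_per_watt * psu_eff
    print(f"PSU at {psu_eff:.0%}: {at_the_wall:.2f} GFLOPS/Watt at the wall")
```

Even with a cheap 90% PSU the wall-plug figure stays well above 5 GFLOPS/Watt, so the PSU alone doesn't explain a 1 GFLOPS/Watt baseline.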

>
> Yes... it's very hard work to get to a real 75 GFLOP/Watt, but that  
> is what DARPA is all about... High Risk, High Reward.  Somehow,  
> though, I can't see building a new fab with smaller feature sizes  
> for the paltry sum of 20M.

Nvidia's GPUs usually get produced at TSMC, so Nvidia in fact  
doesn't need to worry at all about where to produce them. That's TSMC's  
problem.

Yeah, Intel already projected years ago that by 2020 such factories  
would cost 20 billion dollars apiece - not even adjusted for inflation.

They could be right.

Even AMD's factory cost 4.66 billion dollars, and I bet that the  
state in question (New York) shouldered part of that, with nearly free  
building ground and other offers worth billions of dollars to get them  
to build the factory there.


> More like they'll do some architecture studies, a pile o'modeling  
> (if we DID invest $1B in a new fab, here's what you might be able  
> to do), and do a bunch of work on things like failure tolerant  
> architectures (if you have a sea of processors, and X% are dead at  
> any given time, how do you write software to run on that sea)
>
> I wonder what Nvidia chips are used in Audis and BMWs?

Maybe some 10k Teslas or similar in a datacenter in Munich :)

I would be amazed if a single Tegra SoC gets used, as those are not  
exactly dirt cheap (so they don't really qualify for the car industry).
Then again, at the volumes the car industry buys in, maybe Nvidia  
offered them cheaply.

> The video display

> , perhaps: there's a nifty 3D rendered view of the GPS mapping info  
> in the new BMWs?  I don't see a real need for that kind of  
> horsepower in an Engine Control Unit.  Maybe in a smart cruise  
> control that does station keeping, or in a collision avoidance  
> system.  Actually, I don't really see Nvidia being in the "safety  
> critical" space at all.

Each car has 100 CPUs or so!

>
> Jim Lux
>
>
> -----Original Message-----
> From: beowulf-bounces at beowulf.org [mailto:beowulf- 
> bounces at beowulf.org] On Behalf Of Vincent Diepeveen
> Sent: Monday, December 17, 2012 10:02 AM
> To: Lux, Jim (337C)
> Cc: Beowulf at beowulf.org
> Subject: Re: [Beowulf] DARPA issues 20 MUSD grant to nVidia to go  
> from 1 GFLOPS/Watt to 75 GFLOPS/Watt
>
> On Dec 17, 2012, at 6:27 PM, Lux, Jim (337C) wrote:
>
>> That could be a notional 1 GFLOP/Watt in a fielded system.
>
> Even Linpack is 70%-80% efficient on this, so let's say it should  
> get a conservative 4.5 GFLOPS/Watt effectively in real codes.
>
>
> Note that (to my big surprise) the GPUs are effectively getting  
> higher efficiency than Xeon Phi here.
>
>> The original documents for PERFECT are probably a year or two old by
>> now.. but what DARPA is looking for is a nearly 2 order of magnitude
>> improvement...  Whether they started at 1 or 1.4 or 6 really doesn't
>> make much difference to what they're looking for.
>>
>
> Yeah, well, those 2 orders of magnitude are just 1 order of  
> magnitude if we start at 6.
>
> 6 ==> 75 = factor 12
>
> They speak about 7 nm technology in the accompanying document.  
> That's a very conservative estimate; obviously, in theory, even with  
> today's 2-dimensional way of building (not to mention when things  
> really go 3D), we speak of a theoretical difference of:
>
> (28 / 7) ^ 2 = 4^2 = 16
>
> Given enough time, engineers will easily get that factor of 16 out  
> of the transition over the years from 28/32 nm to 7 nm.
> Note that 7 nm is still far beyond the horizon.
>
> However, if they had needed to improve the current design by a  
> factor of 75 while moving from the 28/32 nm they use today to 7 nm,  
> that would have been a complicated bet.
>
>> In any case, it's a long way from a manufacturer's cut sheet to a
>> system installed in a tank bouncing through the woods..
>>
>>
>> Jim Lux
>>
>> -----Original Message-----
>> From: beowulf-bounces at beowulf.org [mailto:beowulf-
>> bounces at beowulf.org] On Behalf Of Vincent Diepeveen
>> Sent: Monday, December 17, 2012 5:50 AM
>> To: Eugen Leitl
>> Cc: Beowulf at beowulf.org; info at postbiota.org
>> Subject: Re: [Beowulf] DARPA issues 20 MUSD grant to nVidia to go  
>> from
>> 1 GFLOPS/Watt to 75 GFLOPS/Watt
>>
>> "today's 1 GFLOPS/Watt"?
>>
>> The K20X delivers nearly 1.4 TFLOPS.
>> If I google it, its TDP is 235 watts.
>>
>> 1.4 TFLOPS / 235 W ≈ 6 GFLOPS/Watt
>>
>> On Dec 17, 2012, at 2:21 PM, Eugen Leitl wrote:
>>
>>>
>>> http://www.networkworld.com/community/blog/darpa-awards-20m-nvidia-
>>> stretch-achilles-heel-advanced-computing-power
>>>
>>>
>>
>
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin  
> Computing To change your subscription (digest mode or unsubscribe)  
> visit http://www.beowulf.org/mailman/listinfo/beowulf


