[Beowulf] 3.79 TFlops sp, 0.95 TFlops dp, 264 TByte/s, 3 GByte, 198 W @ 500 EUR

Lux, Jim (337C) james.p.lux at jpl.nasa.gov
Thu Dec 22 08:27:46 PST 2011

The problem with FPGAs (and I use a fair number of them) is that you're
never going to get the same picojoules-per-bit-transition kind of power
consumption that you do with a purpose-designed processor.  The extra
logic needed to make it "reconfigurable", and the physical junction sizes
as well, make it so.

What you will find is that on certain kinds of problems, you can implement
a more efficient algorithm in an FPGA than you can in a conventional
processor or GPU.  So, for that class of problem, the FPGA is a winner
(things lending themselves to fixed-point systolic-array-type processes
are good candidates).
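To make the systolic-array point concrete, here is a minimal sketch (my own illustrative code, not from the post) of a fixed-point FIR filter modeled as a 1-D pipeline of multiply-accumulate cells -- the flow-through structure that maps naturally onto FPGA fabric. The Q8 fixed-point format and all names are assumptions chosen for the example.

```python
# Illustrative sketch: a fixed-point FIR filter as a 1-D systolic array of
# MAC cells. Each tap holds a sample register; samples shift one cell per
# "clock", as they would in hardware. Q8 format is an arbitrary choice.

FRAC_BITS = 8  # Q8 fixed point: stored value = real value * 2**FRAC_BITS

def to_fixed(x):
    """Convert a float to Q8 fixed point."""
    return int(round(x * (1 << FRAC_BITS)))

def systolic_fir(samples, coeffs):
    """Simulate the systolic pipeline one clock at a time."""
    taps = [to_fixed(c) for c in coeffs]
    regs = [0] * len(taps)            # per-cell sample registers
    out = []
    for s in (to_fixed(v) for v in samples):
        regs = [s] + regs[:-1]        # shift the new sample into the array
        acc = sum(r * t for r, t in zip(regs, taps))
        out.append(acc >> FRAC_BITS)  # renormalize the Q8*Q8 product
    return out

# Impulse through a two-tap filter recovers the coefficients in Q8.
print(systolic_fir([1.0, 0.0, 0.0], [0.5, 0.25]))  # [128, 64, 0]
```

In real fabric each MAC cell would be a DSP slice and the shift a register chain, so throughput is one sample per clock regardless of filter length -- which is exactly why this class of problem favors the FPGA.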

Bear in mind also that while an FPGA may have, say, a 10-million-gate
equivalent, any given practical design is going to use a small fraction of
those gates.  Fortunately, most of those unused gates aren't toggling, so
they don't consume clock-related power, but they do draw leakage
current, so the whole clock-rate-vs-core-voltage trade winds up a bit
different for FPGAs.
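A quick back-of-envelope illustrates that trade (all numbers here are made up for illustration, not measurements): dynamic power scales as alpha * C * V^2 * f and is only paid by the fraction of the fabric that toggles, while leakage (V * I_leak) is paid by every gate on the die, used or not.

```python
# Illustrative only: the standard CMOS power split, with invented numbers
# meant to show how leakage can dominate when fabric utilization is low.

def dynamic_power(alpha, c_farads, v_volts, f_hz):
    """Switching power: activity factor * capacitance * V^2 * frequency."""
    return alpha * c_farads * v_volts**2 * f_hz

def leakage_power(v_volts, i_leak_amps):
    """Static power drawn by the whole die, toggling or not."""
    return v_volts * i_leak_amps

# Assume (hypothetically) only 10% of a big fabric toggles at 200 MHz,
# but 100% of it leaks.
p_dyn = dynamic_power(alpha=0.1, c_farads=2e-9, v_volts=1.0, f_hz=200e6)
p_leak = leakage_power(v_volts=1.0, i_leak_amps=5.0)
print(p_dyn, p_leak)  # with these numbers, leakage dwarfs switching power
```

Since leakage doesn't drop when you slow the clock, the usual "lower f, lower V" trade buys less on an FPGA than on a fully utilized ASIC.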

The biggest problem with FPGAs is that they are difficult to write high-
performance software for.  With FORTRAN on conventional, vectorized, and
pipelined processors, we've got 50 years of compiler-writing expertise
and real high-performance libraries.  And, literally, millions of people
know how to code in FORTRAN or C or something, so if you're looking
for the highest-performance coders, even at the 4-sigma level, you've got
a fair number to choose from.  For numerical computation in FPGAs, not so
many. I'd guess that a large fraction of FPGA developers are doing one of
two things: 1) digital signal processing, flow-through kinds of stuff
(error-correcting codes, compression/decompression, crypto); 2) bus
interface and data handling (PCI bus, disk drive controllers, etc.).

Interestingly, even with the relative scarcity of FPGA developers versus
conventional CPU software developers, the average salaries aren't that
far apart.  The distribution for "generic coders" is wider (particularly
on the low end; barriers to entry are lower for C, Java, what-have-you
code monkeys), but there are very, very few people making more than, say,
150-200k/yr doing either (except in a few anomalous industries, where
compensation is higher than normal in general, and leaving out "equity
participation" type deals).

On 12/22/11 7:42 AM, "Prentice Bisbal" <prentice at ias.edu> wrote:

>On 12/22/2011 09:57 AM, Eugen Leitl wrote:
>> On Thu, Dec 22, 2011 at 09:43:55AM -0500, Prentice Bisbal wrote:
>>> Or if your German is rusty:
>> Wonder what kind of response will be forthcoming from nVidia,
>> given developments like
>> It does seem that x86 is dead, despite good Bulldozer performance
>> in Interlagos
>> (engage dekrautizer of your choice).
>At SC11, it was clear that everyone was looking for ways around the
>power wall. I saw 5 or 6 different booths touting the use of FPGAs for
>improved performance/efficiency. I don't remember there being a single
>FPGA booth in the past. Whether the accelerator is GPU, FPGA, GRAPE,
>Intel MIC, or something else, I think it's clear that the future of HPC
>architecture is going to change radically in the next couple years,
>unless some major breakthrough occurs for commodity processors.
>I think DE Shaw Research's Anton computer, which uses FPGAs and custom
>processors, is an excellent example of what the future of HPC might look
>like.
>Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
>To change your subscription (digest mode or unsubscribe) visit
