[Beowulf] [landman at scalableinformatics.com: Re: [Bioclusters] FPGA in bioinformatics clusters (again?)]
James.P.Lux at jpl.nasa.gov
Sat Jan 14 15:56:51 PST 2006
At 08:52 AM 1/14/2006, Eugen Leitl wrote:
>... don't see attached processing
>replacing clusters or HPC systems ...
>Joseph Landman, Ph.D.
Using masses of FPGAs is fundamentally different from using masses of
computers in a number of ways:
1) The FPGA is programmed at a lower conceptual level if you really want to
get the performance benefit. Sure, you can implement multiple PowerPC
cores on a Xilinx, and even run off-the-shelf PPC software on them,
but it would be cheaper and faster to just buy PowerPCs.
There's a world of difference between modifying someone's program in
FORTRAN or C and modifying something in Verilog or VHDL. Fundamentally,
the FPGA is a bunch of logic gates, not a sequential von Neumann
computer with a single ALU. The usage model is different.
2) Most large FPGAs have very high bandwidth links available (RocketIO
for Xilinx, for instance), although they're hardly a commodity generic
thing with well-defined high-level methods of use (i.e., nothing like using
sockets). You're hardly likely to rack and stack FPGAs.
3) Hardware failure probabilities are probably comparable between FPGAs and
conventional CPUs. However, you're hardly likely to get the same economies
of scale for FPGAs. Megagate FPGAs are in the multi-kilobuck range, just
for the chip. There isn't a well-developed commodity motherboard market for
FPGAs where you can just pick your FPGA and choose from among a half dozen
boards that are all essentially functionally identical. What "generic" FPGA
boards exist are probably in the multi-kilobuck range as well.
4) There are applications at which FPGAs excel, but an FPGA solution to
such a problem is likely to be very tailored to that problem, and not
particularly useful for other problems. The FPGA may be a perfectly
generalized resource, but the system into which it is soldered is not
likely to be.
Joe's analogy to video coprocessors is apt. They are very specialized to a
particular need, where they achieve spectacular performance, especially in
an operations-per-dollar or operations-per-watt sense. However, they are
very difficult to apply in a generalized way to a variety of problems.
Of course, the video coprocessor is actually an ASIC, essentially
hardwired for a particular algorithm (or set of algorithms). You don't see
many video cards out there with an FPGA on them, for the reason that the
price/performance would not be very attractive.
(Mind you, if you've got the resources and a suitably small set of
problems you're interested in, developing ASICs is the way to go. D. E.
Shaw Research is doing just this for their computational chemistry
problems.)
5) FPGA development tools are wretchedly expensive compared to the tools
for "software". It's a more tedious, difficult, and expensive development
process. There are a lot more software developers than FPGA designers out
there, so it's harder to find someone to do your FPGA work. It's not really
a dollar issue (top pros in both fields are in the $100K/yr salary range);
it's finding someone who's interested in working on YOUR project.
Sure, there are some basic "free-ish" tools out there for some FPGAs, but
if you want to do the equivalent of programming in a high-level language,
you're going to be forking out $100K/yr for tools.
Resynthesizing your FPGA design is often a lot slower than recompiling your
program. Also, because the design is basically a set of logic gates, there
typically isn't any concept of recompiling just one part and relinking. You
resynthesize EVERYTHING, and then you have to revalidate all the timings,
regenerate test vectors, etc. There's no equivalent of "patching".
James Lux, P.E.
Spacecraft Radio Frequency Subsystems Group
Flight Communications Systems Section
Jet Propulsion Laboratory, Mail Stop 161-213
4800 Oak Grove Drive
Pasadena CA 91109