[Beowulf] [kathleen at massivelyparallel.com: RE: [Bioclusters] FPGA in bioinformatics clusters (again?)]

Michael Will mwill at penguincomputing.com
Mon Jan 16 08:19:48 PST 2006

I have always been amazed at the promises of Massively Parallel. Now
their technique is so good they don't even need the source code to
parallelize.

...but if I told you how, I would have to kill you...

Michael Will 

-----Original Message-----
From: beowulf-bounces at beowulf.org [mailto:beowulf-bounces at beowulf.org]
On Behalf Of Eugen Leitl
Sent: Saturday, January 14, 2006 2:38 AM
To: Beowulf at beowulf.org
Subject: [Beowulf] [kathleen at massivelyparallel.com: RE: [Bioclusters]
FPGA in bioinformatics clusters (again?)]

----- Forwarded message from Kathleen <kathleen at massivelyparallel.com>

From: Kathleen <kathleen at massivelyparallel.com>
Date: Fri, 13 Jan 2006 22:15:59 -0700
To: "'Clustering,  compute farming & distributed computing in life
science informatics'" <bioclusters at bioinformatics.org>
Subject: RE: [Bioclusters] FPGA in bioinformatics clusters (again?)
X-Mailer: Microsoft Office Outlook, Build 11.0.5510
Reply-To: "Clustering,  compute farming & distributed computing in life
science informatics" <bioclusters at bioinformatics.org>

Hi Larry,

You present some interesting information.  In other computationally
intense applications, biometrics for instance, we have seen that we can
outperform FPGAs at a significantly lower price point.  It would be
great to test what we believe to be true in the bioinformatics arena.
As for HMMER, GROMACS, AMBER and others having been parallelized for
some time, I fully concur.  We're parallelizing them using a methodology
other than MPI and what is currently on the market, because we see
several orders of magnitude better performance and scalability on
COTS-based hardware for BLAST.  We have no reason to believe we won't
see similar results for other bioscience applications, as we've seen
similar results in other markets, such as pattern recognition, seismic
processing and LIDAR.  We're choosing popular open source applications
because they are widely used, but if someone has a proprietary solution
they'd like to see scream, and made available as a hosted
revenue-producing solution or for internal use, we'd be interested in
talking.  I might add that we may not even need to see source code to
parallelize.
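Whatever Massively Parallel's undisclosed methodology is, the reason
BLAST in particular scales so readily on COTS hardware is that each
query is independent work, so queries can be sharded across workers
with no inter-worker communication.  A minimal sketch of that standard
embarrassingly-parallel pattern (all names hypothetical; this is not
their method, and the scoring function is a stand-in for a real
alignment search):

```python
# Sketch of why BLAST-style workloads scale on COTS clusters:
# queries are independent, so they shard cleanly across workers.
from multiprocessing.pool import ThreadPool

def score(query):
    # Stand-in for one real alignment search (one BLAST query);
    # the checksum here is purely illustrative.
    return (query, sum(ord(c) for c in query) % 97)

def chunked(items, n):
    """Deal items round-robin into n shards, one per worker."""
    return [items[i::n] for i in range(n)]

def search_shard(shard):
    return [score(q) for q in shard]

def search_all(queries, workers=4):
    # On a real cluster each shard would go to a separate node
    # (via MPI, a batch scheduler, or anything else); a thread
    # pool keeps this sketch self-contained on one machine.
    with ThreadPool(workers) as pool:
        results = pool.map(search_shard, chunked(queries, workers))
    # Merging is plain concatenation because shards are disjoint.
    return [hit for shard in results for hit in shard]
```

Because no shard ever talks to another, this pattern scales with node
count until the shared database or result merge becomes the bottleneck.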

Kathleen wrote:
> TIA:
> One thing to consider when using FPGAs for bioinformatics or any 
> complex computational life science application is whether or not the 
> FPGA supports cross coupled communication.

... which is needed in which algorithms?

> I believe FPGAs do not; therefore, FPGAs will be limited in
> scalability and performance, and very expensive.

FPGA performance on various algorithms is already 1 to 2 orders of
magnitude (i.e. 10 to 100 times) faster than a single CPU.  The price
points of modern FPGA boards running modern algorithms are about 2-4x
single-node pricing.
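For concreteness, the price/performance arithmetic those two ranges
imply (endpoints only, purely illustrative):

```python
# Work-per-dollar advantage of an FPGA board over a single CPU node,
# using the ranges quoted above: 10-100x speedup at 2-4x the price.
def price_perf_advantage(speedup, price_ratio):
    return speedup / price_ratio

conservative = price_perf_advantage(10, 4)   # slow end, pricey board
optimistic = price_perf_advantage(100, 2)    # fast end, cheap board
print(conservative, optimistic)  # 2.5 50.0
```

So even the conservative corner of the quoted ranges leaves the FPGA
ahead on work per dollar, before counting power and rack space.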

As FPGAs improve (and they are improving), expect significantly larger
gains.

> almost have HMMR and GROMACS parallelized and will be parallelizing 
> AMBER or some other MD code next.

Hmmm.... HMMer was parallelized using PVM quite some time ago, and using
MPI quite recently (with excellent results reported).  GROMACS also has
been parallel for a while.

Gromacs: http://www.gromacs.org/benchmarks/scaling.php
HMMer: http://hmmer.wustl.edu

Amber and many other MD codes have been parallelized for a while now...

> Kathleen Erickson
> Senior Marcom Strategist , Massively Parallel Technologies, Inc.

Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics LLC,
email: landman at scalableinformatics.com
web  : http://www.scalableinformatics.com
phone: +1 734 786 8423
fax  : +1 734 786 8452
cell : +1 734 612 4615
Bioclusters maillist  -  Bioclusters at bioinformatics.org


----- End forwarded message -----
Eugen* Leitl leitl http://leitl.org
ICBM: 48.07100, 11.36820            http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
