[Beowulf] [kathleen@massivelyparallel.com: RE: [Bioclusters] FPGA in bioinformatics clusters (again?)]

Eugen Leitl eugen at leitl.org
Sat Jan 14 02:37:33 PST 2006


----- Forwarded message from Kathleen <kathleen at massivelyparallel.com> -----

From: Kathleen <kathleen at massivelyparallel.com>
Date: Fri, 13 Jan 2006 22:15:59 -0700
To: "'Clustering,  compute farming & distributed computing in life science informatics'" <bioclusters at bioinformatics.org>
Subject: RE: [Bioclusters] FPGA in bioinformatics clusters (again?)
X-Mailer: Microsoft Office Outlook, Build 11.0.5510
Reply-To: "Clustering,  compute farming & distributed computing in life science informatics" <bioclusters at bioinformatics.org>

Hi Larry,

You present some interesting information.  We've seen in other types of
computationally intense applications, biometrics for instance, that we can
outperform FPGAs at a significantly reduced price point.  It would be
great to test what we believe to be true in the bioinformatics arena.  As
for HMMer, GROMACS, AMBER and others having been parallelized for some
time, I fully concur.  We're parallelizing them using a methodology other
than MPI and what is currently on the market, because we see several orders
of magnitude increased performance and scalability on COTS-based hardware
for BLAST.  We have no reason to believe we won't see similar results for
other bioscience applications, as we've seen similar results in other
markets, such as pattern recognition, seismic processing and LIDAR.  We're
choosing popular open-source applications because they are widely used, but
if someone has a proprietary solution they'd like to see scream and made
available as a hosted revenue-producing solution or for internal use, we'd
be interested in talking.  I might add that we may not even need to see
source code to parallelize.

Cheers,

K



Kathleen wrote:
> TIA:
> 
> One thing to consider when using FPGAs for bioinformatics or any 
> complex computational life science application is whether or not the 
> FPGA supports cross-coupled communication.

... which is needed in which algorithms?

> I believe FPGAs do not; therefore, FPGAs will be limited in 
> scalability and performance, and will be very expensive.

FPGA performance on various algorithms is already 1 to 2 orders of magnitude
(i.e. 10 to 100 times) faster than a single CPU.  The price points of modern
FPGA boards running modern algorithms are about 2-4x single-node pricing.

As FPGAs improve (and they are), expect to achieve significantly more
performance.
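Those two figures combine into a rough performance-per-dollar estimate; a
minimal sketch, using only the 10-100x speedup and 2-4x price ranges quoted
above (the single CPU node is the assumed baseline):

```python
# Rough performance-per-dollar comparison, using the ranges quoted above:
# FPGA speedup of 10-100x over a single CPU, at 2-4x single-node price.
def perf_per_dollar_gain(speedup, price_multiple):
    """Factor by which FPGA perf/$ exceeds the single-node baseline."""
    return speedup / price_multiple

best_case = perf_per_dollar_gain(100, 2)   # fastest algorithm, cheapest board
worst_case = perf_per_dollar_gain(10, 4)   # slowest algorithm, priciest board
print(best_case, worst_case)               # 50.0 2.5
```

So even at the pessimistic end of both ranges, the FPGA still comes out
ahead of a single node on a perf-per-dollar basis.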

[...]

> almost have HMMer and GROMACS parallelized and will be parallelizing 
> AMBER or some other MD code next.

Hmmm.... HMMer was parallelized using PVM quite some time ago, and using MPI
quite recently (with excellent results reported).  GROMACS also has been
parallel for a while.

Gromacs: http://www.gromacs.org/benchmarks/scaling.php
HMMer: http://hmmer.wustl.edu

Amber and many other MD codes have been parallelized for a while now...
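The database searches in question parallelize coarsely: split the sequence
database into shards, search each shard independently, merge the hits.  A
hypothetical sketch of that pattern (illustration only -- the real parallel
HMMer and BLAST wrappers use PVM or MPI, not this code, and the toy 3-mer
"scoring" below stands in for a real alignment score):

```python
# Coarse-grained database-splitting parallelism, as used (conceptually)
# by parallel sequence-search codes.  Illustration only.
from multiprocessing import Pool

def search_chunk(args):
    query, chunk = args
    # Stand-in scoring: report sequences sharing any 3-mer with the query.
    kmers = {query[i:i + 3] for i in range(len(query) - 2)}
    return [seq for seq in chunk
            if any(seq[i:i + 3] in kmers for i in range(len(seq) - 2))]

def parallel_search(query, database, nworkers=4):
    # One shard per worker; shards are searched independently.
    shards = [database[i::nworkers] for i in range(nworkers)]
    with Pool(nworkers) as pool:
        per_shard = pool.map(search_chunk, [(query, s) for s in shards])
    # Merge the per-shard hit lists.
    return [hit for hits in per_shard for hit in hits]

if __name__ == "__main__":
    db = ["ACGTACGT", "TTTTTTTT", "GGACGTCC"]
    print(parallel_search("ACGT", db, nworkers=2))
```

Because each shard is independent, this scales to as many nodes as there
are shards -- which is why these codes took so readily to PVM and MPI.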

> Kathleen Erickson
> Senior Marcom Strategist , Massively Parallel Technologies, Inc.

--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics LLC,
email: landman at scalableinformatics.com
web  : http://www.scalableinformatics.com
phone: +1 734 786 8423
fax  : +1 734 786 8452
cell : +1 734 612 4615
_______________________________________________
Bioclusters maillist  -  Bioclusters at bioinformatics.org
https://bioinformatics.org/mailman/listinfo/bioclusters





----- End forwarded message -----
-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820            http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE

