[Beowulf] updated GPU-HMMer/mpiHMMer bits

Joe Landman landman at scalableinformatics.com
Sun Feb 8 21:11:42 PST 2009

This time the comparison was against a loaner 2.3 GHz AMD Shanghai (the 
same machine I ran the rzf tests on a few weeks ago).

From the mpihmmer mailing list:

A new release of GPU-HMMER is available at www.mpihmmer.org.  The most
notable change in the new code is support for multi-GPU systems.  We
have tested the current GPU-HMMER with up to 3 GPUs, and have achieved
over 100x speedup with sufficiently large HMMs.  A few bug fixes have
been applied as well, so I would encourage users to update.  Users who
update should be aware that several command line options have changed,
and should check the GPU-HMMER user guide for details.

While the system requirements haven't changed from the last version,
users who intend to use multiple GPUs should be aware that they will
need a substantial amount of system memory to do so.  The 3-GPU
system I've been using has 16GB of RAM.  That is probably overkill;
8GB or so should be adequate.

As always, any comments, bug reports, etc. are welcome.

best regards,
JP Walters


Updated mpiHMMer results were shown at SC08; the graph I saw topped out 
at about 180x over a single thread, though I think the parallel I/O 
version can scale higher still.

Kudos to JP and the team for doing a great job on this!

Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics LLC,
email: landman at scalableinformatics.com
web  : http://www.scalableinformatics.com
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615
