[Beowulf] MPI application benchmarks

Brian D. Ropers-Huilman brian.ropers.huilman at gmail.com
Sat May 5 10:10:03 PDT 2007

On 5/4/07, Martin Siegert <siegert at sfu.ca> wrote:
> We will be purchasing a shared cluster for a wide community (currently
> more than 1000 users). Thus, the common response on this list to evaluate
> hardware - "use your own application as benchmark" - does not work:
> users change, users' applications change, etc., etc. Thus, I need a
> benchmark suite that tests a wide spectrum of properties.

My answer is still to "use your own application(s)." Poll your users
and find out what they have and what they are going to run. Find some
who already have codes that scale well (>1000 cores) and ask them to
participate. Many vendors will allow you to run your own codes on
systems they have at their own sites before you decide to purchase.
These vendor-hosted systems are typically only 256 cores or fewer, but
they still give you some idea of how your codes might run.

I also suggest picking some representative synthetic benchmarks to
test floating-point and integer performance, memory bandwidth, and MPI
ping-pong latency and bandwidth (SPEC MPI2007 and the HPC Challenge
codes, among others, would fit here). Many sites then take all of
these results (synthetic plus their own applications) and aggregate
them, possibly with weighting factors, into a single number.
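One common way to do that aggregation is a weighted geometric mean of
speedups relative to a reference machine. A minimal sketch, assuming
made-up benchmark names, runtimes, and weights (the geometric mean is
one reasonable choice, not the only one):

```python
import math

def composite_score(results, reference, weights):
    """Weighted geometric mean of speedups vs. a reference system.

    results and reference map benchmark name -> runtime in seconds
    (lower is better); weights map benchmark name -> relative weight.
    """
    total_w = sum(weights.values())
    log_sum = 0.0
    for name, w in weights.items():
        speedup = reference[name] / results[name]  # >1 means faster than reference
        log_sum += w * math.log(speedup)
    return math.exp(log_sum / total_w)

# Hypothetical numbers: two synthetics plus one user application,
# with the user application weighted twice as heavily.
reference = {"stream": 120.0, "pingpong": 80.0, "user_app": 3600.0}
candidate = {"stream": 60.0, "pingpong": 80.0, "user_app": 1800.0}
weights   = {"stream": 1.0, "pingpong": 1.0, "user_app": 2.0}

print(round(composite_score(candidate, reference, weights), 3))
```

A geometric mean keeps one benchmark from dominating the single number
the way an arithmetic mean of raw speedups would.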

If you do this over a number of years and a number of systems, with the
same benchmarks, you can even start to normalize against a "base"
system and take things like different core counts and costs into
account.
Brian D. Ropers-Huilman, Director
Systems Administration and Technical Operations
Supercomputing Institute                           <bropers at msi.umn.edu>
599 Walter Library                                   +1 612-626-5948 (V)
117 Pleasant Street S.E.                             +1 612-624-8861 (F)
University of Minnesota                               Twin Cities Campus
Minneapolis, MN 55455-0255                       http://www.msi.umn.edu/
