[Beowulf] MPI application benchmarks
Mark Hahn
hahn at mcmaster.ca
Sun May 6 22:13:45 PDT 2007
> Sigh. I thought I could avoid that response. Our own code (due to the no.
> of users who all believe that their code is the most important and
> therefore must be benchmarked) is so massive that any potential RFP
> respondent would have to work a year to run the code. Thus, we have to
sure. the suggestion is only useful if the cluster is dedicated to
a single purpose or two. for anything else, I really think that
microbenchmarks are the only way to go. after all, your code probably
doesn't do anything which is truly unique, but rather is some
combination of a theoretical microbenchmark "basis set". no, I don't
know how to establish the factor weights, or whether this approach
really provides a good predictor. but isn't it the obvious way,
even the only tractable way?
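to make the basis-set idea concrete: run a handful of microbenchmarks on
each reference system you already have, fit the factor weights against
observed runtimes of the real application, and then predict the runtime
on a proposed machine from its microbenchmark scores alone. a minimal
sketch of that approach, using an ordinary least-squares fit -- all the
numbers and benchmark choices below are hypothetical, not measured data:

```python
# Sketch: predict application runtime as a weighted combination of
# microbenchmark scores (the "basis set").  All data here is made up.
import numpy as np

# Rows: reference systems we can measure directly.
# Columns: microbenchmark results (e.g. memory bandwidth, MPI latency,
# peak flops), normalized to a common scale.
micro = np.array([
    [1.0, 0.8, 1.2],
    [0.6, 1.1, 0.9],
    [1.3, 0.7, 1.0],
    [0.9, 1.0, 1.1],
])
# Observed runtime of the real application on each reference system.
app_time = np.array([10.2, 11.5, 9.1, 9.8])

# Least-squares fit of the factor weights.
weights, *_ = np.linalg.lstsq(micro, app_time, rcond=None)

# Predict the application runtime on a proposed, unbenchmarked machine
# from its microbenchmark scores alone.
proposed = np.array([1.1, 0.9, 1.05])
predicted = proposed @ weights
print(f"predicted runtime: {predicted:.1f}")
```

whether a linear model with a few microbenchmark terms is a *good*
predictor is exactly the open question above; the sketch only shows
that establishing the weights is mechanically cheap once you have a few
reference systems to fit against.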
>> Anyone who does something different for serious RFP purposes is playing
>> with their lives (at least in our surroundings - civil servants are
>> heavily watched as far as fraud / or attempted fraud in these cases goes).
I don't really understand this statement. no one is really going to
audit your decision and make you prove that you bought from the "correct"
vendor - you simply need to have a plausible rationale for the decision.
> E.g., we will almost certainly include gromacs (which still leaves the
> question of the input parameters, etc.).
that's what makes the "your own code" suggestion so uselessly narrow.
I'd be surprised if gromacs couldn't be persuaded (through varied
inputs and config) to prefer most any particular hardware: IB vs 10G,
x86_64 vs ia64 vs power, even more-cheaper-smaller vs fewer-fatter nodes.
this is, of course, complicated by the fact that some workloads use
5 MB/core, and others would like 6000x that much (roughly 30 GB/core). the former are probably
serial, and the latter are probably not large-tight-mpi. I know of no
really good way to grok this in its fullness.