[Beowulf] MPI2007 out - strange pop2 results?
Brian Dobbins
bdobbins at gmail.com
Wed Jul 18 18:50:50 PDT 2007
Hi guys,
Greg, thanks for the link! It will no doubt take me a little while to
parse all the MPI2007 info (even though there are only a few submitted
results at the moment!), but one of the first things I noticed was that
the pop2 performance on the HP blade system was beyond atrocious... any
thoughts on why that is? I can't see any logical reason for the
scaling they report, which (being the first thing I noticed) makes me somewhat
hesitant to put much stock in the results at the moment. Perhaps this
system is just a statistical blip that will fade into the noise once
additional results are posted, but until then, it'd be nice to
know why the results are the way they are.
To spell it out a bit, the reference platform is at 1 (ok, 0.994) on 16
cores, but then the HP blade system at 16 cores is at 1.94. Not bad there.
However, moving up we have:
32 cores - 2.36
64 cores - 2.02
128 cores - 2.14
256 cores - 3.62
So not only does it hover at 2.x for a while, but then going from 128 ->
256 it gets a decent relative improvement. Weird.
On the other hand, the Cambridge system (with the same processors and a
roughly similar interconnect, it seems) has the following scaling from 32->256
cores:
32 cores - 4.29
64 cores - 7.37
128 cores - 11.5
256 cores - 15.4
... So, I'm mildly confused by that first set of results. Granted, different
compilers are being used, and presumably there are other differences, too,
but I can't see how -any- of them could result in the scores the HP system
got. Any thoughts? Anyone from HP (or QLogic) care to comment? I'm not
terribly knowledgeable about the MPI2007 suite yet, unfortunately, so maybe
I'm just overlooking something.
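For what it's worth, here's the quick back-of-the-envelope Python snippet I
used to eyeball the two systems side by side. It just takes the SPEC ratios
quoted above, normalizes each system to its own 32-core number, and computes
the usual relative speedup and efficiency; the formula is mine, not anything
from the SPEC run rules, and the 16-core points are left out since I don't
have one for Cambridge.

    # Relative speedup/efficiency vs. each system's own 32-core ratio.
    # Ratios copied from the published MPI2007 pop2 results quoted above.
    ratios = {
        "HP blade":  {32: 2.36, 64: 2.02, 128: 2.14, 256: 3.62},
        "Cambridge": {32: 4.29, 64: 7.37, 128: 11.5, 256: 15.4},
    }

    for system, runs in ratios.items():
        base = runs[32]
        print(system)
        for cores in sorted(runs):
            speedup = runs[cores] / base            # relative to 32 cores
            efficiency = speedup / (cores / 32.0)   # ideal would be 1.0
            print("  %3d cores: speedup %.2f, efficiency %.2f"
                  % (cores, speedup, efficiency))

Run that and the HP box comes out around 19% efficiency at 256 cores against
its own 32-core run, versus roughly 45% for the Cambridge system, which is
exactly what has me scratching my head.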
Cheers,
- Brian