Hi guys,

Greg, thanks for the link! It will no doubt take me a little while to parse all the MPI2007 info (even though there are only a few submitted results at the moment!), but one of the first things I noticed was that the performance of pop2 on the HP blade system was beyond atrocious... any thoughts on why that is? I can't see any logical reason for the scaling they got, which (being the first thing I noticed) makes me somewhat hesitant to put much stock in the results at the moment. Perhaps this system is just a statistical blip on the radar that will fade into the noise once additional results are posted, but until then, it'd be nice to know why the results are the way they are.

To spell it out a bit: the reference platform is at 1 (ok, 0.994) on 16 cores, while the HP blade system at 16 cores is at 1.94. Not bad there. However, moving up we have:

 32 cores - 2.36
 64 cores - 2.02
128 cores - 2.14
256 cores - 3.62

So not only does it hover at 2.x for a while, but then going from 128 -> 256 it gets a decent relative improvement. Weird.
On the other hand, the Cambridge system (with the same processors and a roughly similar interconnect, it seems) has the following scaling from 32 -> 256 cores:

 32 cores - 4.29
 64 cores - 7.37
128 cores - 11.5
256 cores - 15.4
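
To quantify what I mean, here's a quick back-of-the-envelope Python sketch built from nothing but the scores quoted above. It treats score ratios within a single system as relative speedup (my assumption; I don't know that the suite guarantees scores scale that way) and measures parallel efficiency against each system's own smallest run:

# Parallel efficiency from the quoted pop2 scores. Assumption: score
# ratios within one system approximate relative speedup, so
#   efficiency = (score / base_score) / (cores / base_cores),
# taking each system's smallest run as its own baseline.

hp = {16: 1.94, 32: 2.36, 64: 2.02, 128: 2.14, 256: 3.62}
cambridge = {32: 4.29, 64: 7.37, 128: 11.5, 256: 15.4}

def efficiencies(scores):
    base_cores = min(scores)
    base_score = scores[base_cores]
    return {c: (s / base_score) / (c / base_cores)
            for c, s in sorted(scores.items())}

for name, scores in (("HP blade", hp), ("Cambridge", cambridge)):
    print(name)
    for cores, eff in efficiencies(scores).items():
        print(f"  {cores:4d} cores: {eff:6.1%}")

By that (admittedly rough) measure, HP lands somewhere around 12% efficiency at 256 cores while Cambridge is still around 45%, which is exactly the gap that's bothering me.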

... So, I'm mildly confused by that first set of results. Granted, different compilers are being used, and presumably there are other differences too, but I can't see how -any- of them could produce the scores the HP system got. Any thoughts? Anyone from HP (or QLogic) care to comment? I'm not terribly knowledgeable about the MPI2007 suite yet, unfortunately, so maybe I'm just overlooking something.

Cheers,
- Brian