[Beowulf] MPI2007 out - strange pop2 results?

Kevin Ball kevin.ball at qlogic.com
Thu Jul 19 11:51:54 PDT 2007


Hi Brian,

   The benchmark 121.pop2 is based on a code (POP, the Parallel Ocean
Program) that was already important to QLogic customers before the SPEC
MPI2007 suite was released, and we have done a fair amount of analysis
trying to understand its performance characteristics.  Three things
stand out in performance analysis of pop2.

  The first point is that pop2 is a very demanding code for the
compiler.  The PathScale compiler team has done a fair amount of work on
pop2, and the fact that the Cambridge submission used the PathScale
compiler while the HP submission used the Intel compiler accounts for
some (the serial portion) of the advantage at small core counts, though
scalability should not be affected by this.

  The second point is that pop2 is fairly demanding of IO.  A useful
comparison here is the AMD Emerald Cluster results versus the Cambridge
results:  the Emerald cluster uses NFS over GigE from a single
server/disk, while Cambridge has a much more optimized IO subsystem.
While on some benchmarks Emerald scales better, on pop2 it scales only
from 3.71 to 15.0 (4.04X) while Cambridge scales from 4.29 to 21.0
(4.90X).  The HP system appears to be using NFS over DDR IB from a
single server with a RAID; it should therefore fall somewhere between
Emerald and Cambridge in this regard.

  The first two points account for some of the difference, but by no
means all.  The final one is probably the most crucial.  The pop2 code
uses a communication pattern consisting of many small/medium-sized
(between 512 bytes and 4 KB) point-to-point messages punctuated by
periodic tiny (8-byte) allreduces.  The QLogic InfiniPath architecture
performs far better in this regime than the Mellanox InfiniHost
architecture.
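
  To make that pattern concrete, below is a rough MPI microbenchmark
sketch of that kind of traffic: repeated nonblocking neighbor exchanges
of a couple of KB, punctuated by an 8-byte allreduce.  The message size,
iteration count, ring topology, and allreduce interval are illustrative
assumptions on my part, not numbers taken from pop2 itself.

/* Hypothetical sketch of a pop2-like communication pattern:
 * small/medium point-to-point messages with ring neighbors,
 * punctuated by a periodic 8-byte MPI_Allreduce.  Not the actual
 * POP/pop2 code; parameters are illustrative. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int rounds   = 1000;  /* assumed iteration count */
    const int msg_size = 2048;  /* bytes; within the 512 B - 4 KB range cited above */
    char *sendbuf = malloc(msg_size);
    char *recvbuf = malloc(msg_size);
    memset(sendbuf, 0, msg_size);

    int left  = (rank - 1 + size) % size;
    int right = (rank + 1) % size;
    double local = 1.0, global;
    double t0 = MPI_Wtime();

    for (int i = 0; i < rounds; i++) {
        MPI_Request req[2];
        /* small/medium point-to-point exchange with neighbors */
        MPI_Irecv(recvbuf, msg_size, MPI_BYTE, left,  0, MPI_COMM_WORLD, &req[0]);
        MPI_Isend(sendbuf, msg_size, MPI_BYTE, right, 0, MPI_COMM_WORLD, &req[1]);
        MPI_Waitall(2, req, MPI_STATUSES_IGNORE);

        /* periodic tiny allreduce (one double = 8 bytes), e.g. a global sum */
        if (i % 10 == 0)
            MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    }

    double elapsed = MPI_Wtime() - t0;
    if (rank == 0)
        printf("%d ranks, %d rounds: %.3f s\n", size, rounds, elapsed);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}

  In this regime the cost is dominated by per-message overhead and small
collective latency rather than bandwidth, which is why the interconnect's
small-message behavior matters so much here.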

  This is consistent with what we have seen in other application
benchmarking;  even SDR InfiniBand based on the QLogic InfiniPath
architecture generally performs as well as DDR InfiniBand based on the
Mellanox InfiniHost architecture, and in some cases better.


Full disclosure:  I work for QLogic on the InfiniPath product line.

-Kevin


On Wed, 2007-07-18 at 18:50, Brian Dobbins wrote:
> Hi guys,
> 
>   Greg, thanks for the link!  It will no doubt take me a little while
> to parse all the MPI2007 info (even though there are only a few
> submitted results at the moment!), but one of the first things I
> noticed was that performance of pop2 on the HP blade system was beyond
> atrocious... any thoughts on why this is the case?  I can't see any
> logical reason for the scaling they have, which (being the first thing
> I noticed) makes me somewhat hesitant to put much stock into the
> results at the moment.  Perhaps this system is just a statistical blip
> on the radar which will fade into noise when additional results are
> posted, but until that time, it'd be nice to know why the results are
> the way they are. 
> 
>   To spell it out a bit, the reference platform is at 1 (ok, 0.994) on
> 16 cores, but then the HP blade system at 16 cores is at 1.94.  Not
> bad there.  However, moving up we have:
>    32 cores - 2.36
>    64 cores - 2.02
>   128 cores - 2.14
>   256 cores - 3.62
> 
>   So not only does it hover at 2.x for a while, but then going from
> 128 -> 256 it gets a decent relative improvement.  Weird.
>   On the other hand, the Cambridge system (with the same processors
> and a roughly similar interconnect, it seems) has the following
> scaling from 32->256 cores:
> 
>    32 cores - 4.29
>    64 cores - 7.37
>   128 cores - 11.5
>   256 cores - 15.4
> 
>   ... So, I'm mildly confused as to the first results.  Granted,
> different compilers are being used, and presumably there are other
> differences, too, but I can't see how -any- of them could result in
> the scores the HP system got.  Any thoughts?  Anyone from HP (or
> QLogic) care to comment?  I'm not terribly knowledgeable about the MPI
> 2007 suite yet, unfortunately, so maybe I'm just overlooking
> something.
> 
>   Cheers,
>   - Brian
> 



