[Beowulf] SPEC MPI2007

Håkon Bugge Hakon.Bugge at scali.com
Thu Apr 26 10:01:16 PDT 2007


Tom,

As to the importance of a2a and its variants, off the top of my head:

    * STAR-CD (version 3) from CD-ADAPCO uses solely MPI_Alltoallv
    * IS (integer sort) from the NAS kernels uses MPI_Alltoallv
    * LS-DYNA from Livermore Software: approx. 25% of the time spent
      in the MPI library is consumed in MPI_Alltoall (32p,
      neon_refined_revised dataset)
    * Matrix transpose often uses MPI_Alltoall (see the sketches below)

So, yes, I am pretty convinced this is a hole that should be filled.
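
For anyone not familiar with the pattern, here is a minimal, illustrative
sketch (not taken from any of the codes above; the matrix size and buffer
names are made up, and N is assumed divisible by the number of ranks) of a
block-row matrix transpose built on MPI_Alltoall:

#include <mpi.h>
#include <stdlib.h>

#define N 1024                 /* global matrix dimension, illustrative only */

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    int rows = N / nprocs;     /* block rows owned by this rank */
    double *a    = malloc(rows * N * sizeof(double)); /* local rows of A      */
    double *sbuf = malloc(rows * N * sizeof(double)); /* tiles packed to send */
    double *rbuf = malloc(rows * N * sizeof(double)); /* tiles received       */

    /* ... fill 'a' with local data ... */

    /* Pack so the rows x rows tile destined for rank p is contiguous. */
    for (int p = 0; p < nprocs; p++)
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < rows; j++)
                sbuf[p*rows*rows + i*rows + j] = a[i*N + p*rows + j];

    /* Every rank sends one rows x rows tile to every other rank. */
    MPI_Alltoall(sbuf, rows*rows, MPI_DOUBLE,
                 rbuf, rows*rows, MPI_DOUBLE, MPI_COMM_WORLD);

    /* Transpose each received tile locally; 'a' now holds block rows of A^T. */
    for (int p = 0; p < nprocs; p++)
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < rows; j++)
                a[i*N + p*rows + j] = rbuf[p*rows*rows + j*rows + i];

    free(a); free(sbuf); free(rbuf);
    MPI_Finalize();
    return 0;
}

And a similarly illustrative sketch of the kind of variable-sized bucket
exchange an integer sort ends up doing with MPI_Alltoallv (the counts here
are arbitrary, just to show the counts/displacements plumbing):

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    int *sendcounts = malloc(nprocs * sizeof(int)); /* keys owed to each rank */
    int *recvcounts = malloc(nprocs * sizeof(int));
    int *sdispls    = malloc(nprocs * sizeof(int));
    int *rdispls    = malloc(nprocs * sizeof(int));
    for (int p = 0; p < nprocs; p++)
        sendcounts[p] = 1 + (rank + p) % 4;         /* made-up bucket sizes */

    /* Tell every rank how much it will receive from everyone else. */
    MPI_Alltoall(sendcounts, 1, MPI_INT,
                 recvcounts, 1, MPI_INT, MPI_COMM_WORLD);

    /* Build displacements and totals from the counts. */
    int stotal = 0, rtotal = 0;
    for (int p = 0; p < nprocs; p++) {
        sdispls[p] = stotal;  stotal += sendcounts[p];
        rdispls[p] = rtotal;  rtotal += recvcounts[p];
    }

    int *sendbuf = malloc(stotal * sizeof(int));
    int *recvbuf = malloc(rtotal * sizeof(int));
    for (int i = 0; i < stotal; i++)
        sendbuf[i] = rank;                          /* placeholder key data */

    /* The variable-sized exchange itself. */
    MPI_Alltoallv(sendbuf, sendcounts, sdispls, MPI_INT,
                  recvbuf, recvcounts, rdispls, MPI_INT, MPI_COMM_WORLD);

    free(sendcounts); free(recvcounts); free(sdispls); free(rdispls);
    free(sendbuf); free(recvbuf);
    MPI_Finalize();
    return 0;
}

The point of the second sketch is that the per-rank counts and displacements
are only known at run time, which is exactly what MPI_Alltoall alone cannot
express.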



Håkon

At 17:23 26.04.2007, Tom Elken wrote:
>Hi Håkon,
>
>Since that presentation was written, the 18 
>benchmarks were pruned down to 13 for various 
>reasons, but none were added, so there are still no 
>applications with MPI_Alltoall or MPI_Alltoallv 
>in the suite.  We didn't know about this lack until you pointed it out.
>
>It is unclear how important this lack is.  It 
>could be that application writers try to avoid 
>these calls when trying to write scalable 
>applications.  Also, the fact that 18 
>applications, picked more or less at random 
>(while trying to find scalable, publicly 
>available applications from a wide range of 
>disciplines), did not use these calls may mean 
>something.  There is one benchmark still in the 
>suite whose source contains MPI_Alltoall calls, 
>but the test cases we run do not use that function.
>
>But still, the HPG committee will entertain 
>adding such benchmarks for later versions of the benchmark suite.
>
>Thanks,
>Tom
>
>Håkon Bugge wrote:
>>  From the reference below, 18 benchmarks are 
>> characterized; viewgraphs 15 and 16 show the 
>> "Message call count".  Of the 18 benchmarks, 
>> none use MPI_Alltoall or MPI_Alltoallv.  Now, 
>> if the suite has been amended with benchmarks 
>> using MPI_Alltoall, we're only lacking benchmarks using MPI_Alltoallv.
>>And to be precise, I am not searching for said 
>>benchmarks personally; my intent was to 
>>underline that this dimension was missing from 
>>the 18 (initial) benchmarks in the MPI2007 suite.
>>
>>Thanks, Håkon
>
>>>you wrote:
>>>----------------
>>>I just read 
>>>http://www.spec.org/workshops/2007/austin/slides/SPEC_MPI2007.pdf
>
>
>--
>~~~~~~~~~~~~~~~~~~~       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>Tom Elken                 Manager, Performance Engineering
>tom.elken at qlogic.com      QLogic Corporation
>650.934.8056              System Interconnect Group
>

--
Håkon Bugge
CTO
dir. +47 22 62 89 72
mob. +47 92 48 45 14
fax. +47 22 62 89 51
Hakon.Bugge at scali.com
Skype: hakon_bugge

Scali - http://www.scali.com
Scaling the Linux Datacenter




