[Beowulf] SPEC MPI2007
Håkon Bugge
Hakon.Bugge at scali.com
Thu Apr 26 07:02:59 PDT 2007
Hi Tom,
From the reference below, 18 benchmarks are
characterized; viewgraphs 15 and 16 show the
"Message call count". Of the 18 benchmarks, none
use MPI_Alltoall or MPI_Alltoallv. Now, if the
suite has since been amended with benchmarks using
MPI_Alltoall, we are only lacking benchmarks that use MPI_Alltoallv.
To be precise, I am not searching for such
benchmarks for my own use; my intent was to
underline that this dimension was missing from
the 18 (initial) benchmarks in the MPI2007 suite.
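For readers less familiar with the distinction, here is a minimal
sketch of my own (not code from the suite) showing why the two
collectives stress an interconnect differently: MPI_Alltoall exchanges
a fixed-size block between every pair of ranks, while MPI_Alltoallv
lets each rank send a different count to each peer. The counts below
are arbitrary illustration values.

/* Minimal sketch: MPI_Alltoall vs. MPI_Alltoallv.
 * Not from any benchmark; counts are illustrative only. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* MPI_Alltoall: every rank sends exactly one int to every rank. */
    int *sendbuf = malloc(size * sizeof(int));
    int *recvbuf = malloc(size * sizeof(int));
    for (int i = 0; i < size; i++)
        sendbuf[i] = rank * 100 + i;
    MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

    /* MPI_Alltoallv: rank r sends (i + 1) ints to rank i, so message
     * sizes vary per peer; counts and displacements are explicit. */
    int *scounts = malloc(size * sizeof(int));
    int *rcounts = malloc(size * sizeof(int));
    int *sdispls = malloc(size * sizeof(int));
    int *rdispls = malloc(size * sizeof(int));
    int stotal = 0, rtotal = 0;
    for (int i = 0; i < size; i++) {
        scounts[i] = i + 1;      /* we send (i + 1) ints to rank i   */
        rcounts[i] = rank + 1;   /* every peer sends us (rank + 1)   */
        sdispls[i] = stotal;
        rdispls[i] = rtotal;
        stotal += scounts[i];
        rtotal += rcounts[i];
    }
    int *vsend = malloc(stotal * sizeof(int));
    int *vrecv = malloc(rtotal * sizeof(int));
    for (int i = 0; i < stotal; i++)
        vsend[i] = rank;
    MPI_Alltoallv(vsend, scounts, sdispls, MPI_INT,
                  vrecv, rcounts, rdispls, MPI_INT, MPI_COMM_WORLD);

    if (rank == 0)
        printf("alltoall/alltoallv complete on %d ranks\n", size);

    free(sendbuf); free(recvbuf);
    free(scounts); free(rcounts); free(sdispls); free(rdispls);
    free(vsend); free(vrecv);
    MPI_Finalize();
    return 0;
}

Compile with mpicc and run with, e.g., mpirun -np 4. The point is
simply that the v-variant's irregular per-peer counts are the
dimension the current 18 benchmarks do not exercise.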
Thanks, Håkon
At 18:44 25.04.2007, Tom Elken wrote:
>Hi Håkon,
>
>you wrote:
>----------------
>I just read http://www.spec.org/workshops/2007/austin/slides/SPEC_MPI2007.pdf
>
>I am lacking applications using MPI_Alltoall and
>MPI_Alltoallv - these are important dimensions
>to evaluate. Does anyone know of suitable benchmark candidates?
>
>Thanks, Håkon
>----------------
>
>Thanks for the publicity on the forthcoming
>cluster-relevant benchmark suite from SPEC. It
>is due to be launched at ISC'07 in Dresden in
>late June. I am on the SPEC HPG committee that
>is developing the SPEC MPI2007 benchmark suite,
>and we would welcome more members to participate
>in developing these benchmarks, including Scali :)
>
>Are you lacking "applications using MPI_Alltoall
>and MPI_Alltoallv" or are you pointing out their lack in MPI2007?
>
>Thanks for pointing this out to us. I did some
>searching, and a molecular dynamics code named
>CPMD and a weather code named Hirvda (an
>"operational weather forecast used by several
>weather centers in Europe") both use MPI_Alltoall extensively.
>
>I hope these applications will suit your
>needs. If others on the list can suggest more
>of these types of applications, that would be great.
>
>It is too late to add candidate codes to the
>initial release of MPI2007, which ships with
>what are called "medium" datasets. There are
>plans for more scalable versions of the suite in
>the future, with at least Large and perhaps XL
>datasets. Those releases open the possibility
>of adding more codes, and we will
>seriously consider codes that use MPI_Alltoall*.
>
>Thanks,
>Tom Elken
>QLogic Corporation
--
Håkon Bugge
CTO
dir. +47 22 62 89 72
mob. +47 92 48 45 14
fax. +47 22 62 89 51
Hakon.Bugge at scali.com
Skype: hakon_bugge
Scali - http://www.scali.com
Scaling the Linux Datacenter