[Beowulf] multi-threading vs. MPI
Michael H. Frese
Michael.Frese at NumerEx.com
Tue Dec 11 16:21:36 PST 2007
Thanks for the results, and the link. In section 6.7 of the NAS
Parallel Benchmark (NPB) 2.1 Results Report on MPI, NAS-95-010
(PDF, 213 KB,
http://www.nas.nasa.gov/News/Techreports/1996/PDF/nas-96-010.pdf),
I found a discussion of the clustered-SMP issues raised so far in
this thread. It's interesting that issues discussed twelve years ago
are coming around again. Plus ça change..., I suppose.
In addition, there is a table of results in that section for an SGI
Power Challenge Array showing that idling processors on a given node
and using more nodes improves the speed per processor across four
different code kernels and two different problem sizes. This doesn't
tell us how a hybrid MP/MT application would behave within a 4-core,
2-CPU node, but it does hint that memory contention can be just as
nasty a problem as high-latency message transmission.
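
To make the hybrid MP/MT structure concrete, here is a minimal sketch
(illustrative only, not taken from any of the codes discussed in this
thread) of the usual arrangement: OpenMP threads share memory within a
rank, and MPI passes messages between ranks. The toy workload, file
name, and build line are assumptions.

/* hybrid.c: minimal hybrid MPI + OpenMP sketch (illustrative only).
 * Build (toolchain is an assumption):  mpicc -fopenmp hybrid.c -o hybrid
 * Run, e.g., one rank per node and let OMP_NUM_THREADS use that
 * node's cores. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, nranks;
    double local = 0.0, global = 0.0;

    /* FUNNELED: only the main thread of each rank makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Threads share the node's memory; this stands in for real work. */
    #pragma omp parallel reduction(+:local)
    local += omp_get_thread_num() + 1.0;

    /* Messages pass only between ranks (ideally, between nodes). */
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("%d ranks x %d threads each, sum = %g\n",
               nranks, omp_get_max_threads(), global);

    MPI_Finalize();
    return 0;
}

Launched with one rank per node and the threads filling that node's
cores, the threads contend for the node's memory system while the
ranks only exchange messages, which is exactly the trade-off the SGI
table is poking at.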
Mike
At 12:52 PM 12/10/2007, you wrote:
>Some people had asked for more details:
>
>NAS suite version 3.2.1
>Test class was: B
>Units are Mops (Million operations per second)
>see the NAS docs for more information
>
>--
>Doug
>
>
> > I like answering these types of questions with numbers,
> > so in my Sept 2007 Linux magazine column (which should
> > be showing up on the website soon) I did the following.
> >
> > I downloaded the latest NAS benchmarks, written in both
> > OpenMP and MPI, ran them both (multiple times) on an 8-core
> > Clovertown (dual-socket) system, and reported
> > the following results:
> >
> > Test    OpenMP               MPI
> >         (gcc/gfortran 4.2)   (LAM 7.1.2)
> > -------------------------------------------
> > CG       790.6                 739.1
> > EP       166.5                 162.8
> > FT      3535.9                2090.8
> > IS        51.1                 122.5
> > LU      5620.5                5168.8
> > MG      1616.0                2046.2
> >
> > My conclusion: it was a draw of sorts.
> > The article was basically examining the
> > lazy assumption that threads (OpenMP) are
> > always better than MPI on an SMP machine.
> >
> > I'm going to re-run the tests using Harpertowns
> > real soon, maybe try other compilers and MPI
> > versions. It is easy to do. You can get the code here:
> >
> > http://www.nas.nasa.gov/Resources/Software/npb.html
> >
> > --
> > Doug
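
To make the OpenMP-versus-MPI comparison above concrete, here is an
illustrative sketch (not taken from the NPB sources; the kernel,
problem size, file names, and build lines are all assumptions) of the
same dot product built two ways, mirroring the separate OpenMP and
MPI flavors of the benchmarks.

/* dot.c: one kernel, two programming models (illustrative only).
 *   OpenMP build:  gcc -fopenmp -DUSE_OPENMP dot.c -o dot_omp
 *   MPI build:     mpicc -DUSE_MPI dot.c -o dot_mpi
 */
#include <stdio.h>
#include <stdlib.h>

#define N 1000000

#if !defined(USE_OPENMP) && !defined(USE_MPI)
#error "compile with -DUSE_OPENMP or -DUSE_MPI"
#endif

#ifdef USE_OPENMP
#include <omp.h>
/* One process; the threads share the arrays directly. */
static double dot(const double *x, const double *y)
{
    double sum = 0.0;
    long i;
    #pragma omp parallel for reduction(+:sum)
    for (i = 0; i < N; i++)
        sum += x[i] * y[i];
    return sum;
}
#endif

#ifdef USE_MPI
#include <mpi.h>
/* Many processes; each sums its own slice (every rank holds the whole
 * arrays here just to keep the sketch short) and the partial sums are
 * combined by an explicit message-passing reduction. */
static double dot(const double *x, const double *y)
{
    int rank, nranks;
    long i, chunk;
    double partial = 0.0, total = 0.0;

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);
    chunk = N / nranks;                 /* assumes nranks divides N */
    for (i = rank * chunk; i < (rank + 1) * chunk; i++)
        partial += x[i] * y[i];
    MPI_Allreduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    return total;
}
#endif

int main(int argc, char **argv)
{
    double *x, *y, result;
    long i;

#ifdef USE_MPI
    MPI_Init(&argc, &argv);
#endif
    x = malloc(N * sizeof *x);
    y = malloc(N * sizeof *y);
    for (i = 0; i < N; i++) { x[i] = 1.0; y[i] = 2.0; }

    result = dot(x, y);                 /* 2e+06 either way */

#ifdef USE_MPI
    {
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0)
            printf("dot = %g\n", result);
    }
    MPI_Finalize();
#else
    printf("dot = %g (%d threads)\n", result, omp_get_max_threads());
#endif
    free(x);
    free(y);
    return 0;
}

Running OMP_NUM_THREADS=8 ./dot_omp against mpirun -np 8 ./dot_mpi on
the same box is essentially the head-to-head that the table above
reports for the real kernels.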
> >
> >> On this list there is almost unanimous agreement that MPI is the way
> >> to go for parallelism, and that combining multi-threading (MT) and
> >> message-passing (MP) is not even worth it; sticking to MP alone is
> >> all that is necessary.
> >>
> >> However, in real life most people are talking about and investing in
> >> MT, while very few are interested in MP. I also just read on Arch
> >> Robison's blog: "TBB perhaps gives up a little performance short of
> >> optimal so you don't have to write message-passing" (here:
> >> http://softwareblogs.intel.com/2007/11/17/supercomputing-07-computer-environment-and-evolution/
> >> )
> >>
> >> How come there is almost unanimous agreement in the Beowulf community
> >> while the rest of the world seems almost unanimously convinced of the
> >> opposite? Are we just patting ourselves on the back, or is MP not
> >> sufficiently disseminated, or ... ?
> >>
> >> toon