[Beowulf] Re: the comparison between OpenMP and MPI

Peter St. John peter.st.john at gmail.com
Mon May 21 08:24:03 PDT 2007


Rich,
Two things. First, the small and obvious point: "cheapest" isn't the only
motivation for open source, but you know that.
What surprised me was "...can handle more complex codes" and "...to compile
correctly". By compiling correctly, do you mean achieving the desired
performance characteristics for the target executable? In my experience
compilers are reliably logically correct. I once tracked a bug to the
symbolic debugger :-), but never to the compiler itself (although I've
always been able to use mature compilers). Compiler writers pretty much
define "language law". (And I'm sure the ones at Intel are just as proud as
the ones at IBM and CMU.)
As for complexity, I've written things that exceeded the available stack
depth, but really I don't understand a program being too complex for a
compiler. Too long, sure. Everything has resource limitations. But not too
complex. So I'd be very amused to see some examples, maybe of local
complexity; I wouldn't be able to read the 100k lines of Fortran myself :-)
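To make that distinction concrete, here is a throwaway C sketch (my own, not anything from the thread): the compiler has no trouble with this logically trivial program, but running it hits a resource limit.

#include <stdio.h>

/* Compiles without complaint; built without optimization, the call below
 * recurses deeply enough to exhaust a typical default stack at run time.
 * That is a resource limitation, not the program being "too complex". */
long depth(long n)
{
    if (n == 0)
        return 0;
    return 1 + depth(n - 1);
}

int main(void)
{
    printf("%ld\n", depth(100000000L));
    return 0;
}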
Peter


On 5/18/07, Rich Altmaier <richa at sgi.com> wrote:
>
> Hi, I strongly suggest you slightly violate your desire for freeware
> tools.
> After all, getting good data and efficient use of your time can be
> more valuable than finding the cheapest tools.
> Our hands-on experience with many codes suggests
> the Intel tool set can handle far more complex codes than
> open source compilers.
> When you need 100k lines of Fortran to compile correctly,
> you won't find an open source answer, in my opinion.
>
> http://www.intel.com/cd/software/products/asmo-na/eng/index.htm
> Take a look at the compiler, Vtune, libraries, thread analysis tools,
> and cluster tools.  Intel's delivery of software developer tools
> here is very strong.  The compiler supports OpenMP.
> For the MPI library, probably you should go with MVAPICH,
> http://mvapich.cse.ohio-state.edu/
> Presently I see MVAPICH as strong on bandwidth and latency for
> a large number of nodes.
>
> In your comparison, try this: once you have an optimal MPI code,
> convert it back to OpenMP and see how the balance between compute
> and communication can act in your favor.
>
> Just FYI,
> Rich Altmaier, SGI
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org
> To change your subscription (digest mode or unsubscribe) visit
> http://www.beowulf.org/mailman/listinfo/beowulf
>
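For concreteness, a minimal sketch of the kind of comparison Rich suggests above: the same reduction written first with MPI and then with OpenMP. The problem size, the per-element work, and the compile commands are illustrative assumptions, not taken from this thread.

/* MPI version: each rank sums its slice of the index range, then one
 * MPI_Allreduce combines the partial sums -- that call is the explicit
 * communication step.  Build with e.g. "mpicc reduce_mpi.c" (MVAPICH
 * ships an mpicc wrapper). */
#include <mpi.h>
#include <stdio.h>

#define N 1000000

int main(int argc, char **argv)
{
    int rank, size;
    long i, lo, hi;
    double local = 0.0, total = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    lo = (long)rank * N / size;          /* this rank's contiguous slice */
    hi = (long)(rank + 1) * N / size;

    for (i = lo; i < hi; i++)
        local += (double)i * (double)i;  /* stand-in for real per-element work */

    MPI_Allreduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("MPI total = %f\n", total);

    MPI_Finalize();
    return 0;
}

/* OpenMP version of the same reduction: the communication step disappears,
 * since the threads share memory and the reduction clause combines the
 * partial sums.  Build with e.g. "icc -openmp" or "gcc -fopenmp". */
#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(void)
{
    long i;
    double total = 0.0;

#pragma omp parallel for reduction(+:total)
    for (i = 0; i < N; i++)
        total += (double)i * (double)i;  /* same stand-in work */

    printf("OpenMP total = %f\n", total);
    return 0;
}

The thing to watch in the actual comparison is how much time the MPI version spends in communication relative to compute; on a single shared-memory node the OpenMP version carries no equivalent cost, which is where the balance Rich mentions can move in your favor.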