I recall discussion of the hybrid approach, which I think most of the
list doesn't much like, but interested me on account of my application.
But I hadn't realized that going hybrid is what's required if you want
OpenMP on a multi-node architecture. So yeah, I'll just go with MPI for
starters. When I start :-)
Peter

On 6/26/08, Geoff Jacobs <gdjacobs@gmail.com> wrote:
> Peter St. John wrote:
> > Geoff,
> > Oops! I totally misunderstood it. So it's strictly shared-memory, and
> > requires something like MPI for crossing nodes. Gotcha. Big mistake, thanks.
> > Peter
>
> Shared memory only, yes. Many, many people skip OpenMP completely and go
> pure MPI. From a coding standpoint it's far easier to multiprocess using
> one technique rather than two, and the performance gains for using both
> tend to be marginal or non-existent -- at least in my experience.
>
> There was a long discussion a while back on the list about the pros and
> cons of each approach.