[Beowulf] Parallel Programming Question
David N. Lombard
dnlombar at ichips.intel.com
Wed Jul 1 10:24:18 PDT 2009
On Wed, Jul 01, 2009 at 09:10:20AM -0700, Ashley Pittman wrote:
> On Fri, 2009-06-26 at 23:30 -0700, Bill Broadley wrote:
> > Keep in mind that, when you say broadcast, many (not all) MPI
> > implementations do not do a true network layer broadcast... and that
> > in most situations network uplinks are distinct from the downlinks
> > (except for the ACKs).
A network layer broadcast can be iffy; not all switches are created equal.
> > If all clients need all input files you can achieve good performance
> > by either using a BitTorrent approach (send 1/N of the file to each
> > of N clients, then have them re-share it), or even just a simple
> > chain: Head -> node A -> node B -> node C. This works better than
> > you might think, since node A can start uploading immediately and
> > the upload bandwidth doesn't compete with the download bandwidth
> > (well, not much usually).
GIYF. It will find existing implementations of application-level broadcasts
and file transfer pipelines.
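
To make Bill's chain concrete, here's a rough sketch. The path and chunk
size are hypothetical, and it's written with MPI point-to-point calls
purely to show the data flow; as Ashley notes below, you wouldn't want to
hand-roll this in a real application. Rank 0 reads the file; every other
rank receives chunks from its predecessor, writes them to local disk, and
forwards them to its successor:

/* Hypothetical sketch of a "simple chain" file distribution. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define CHUNK (1 << 20)                   /* 1 MiB per hop, arbitrary */

static void chain_file(const char *path, MPI_Comm comm)
{
    int rank, size, n;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    char *buf = malloc(CHUNK);
    FILE *fp = fopen(path, rank == 0 ? "rb" : "wb");

    do {
        if (rank == 0) {                  /* head reads from disk */
            n = (int)fread(buf, 1, CHUNK, fp);
        } else {                          /* others receive from predecessor */
            MPI_Recv(&n, 1, MPI_INT, rank - 1, 0, comm, MPI_STATUS_IGNORE);
            if (n > 0) {
                MPI_Recv(buf, n, MPI_BYTE, rank - 1, 1, comm,
                         MPI_STATUS_IGNORE);
                fwrite(buf, 1, n, fp);    /* persist the chunk locally */
            }
        }
        if (rank + 1 < size) {            /* forward down the chain */
            MPI_Send(&n, 1, MPI_INT, rank + 1, 0, comm);
            if (n > 0)
                MPI_Send(buf, n, MPI_BYTE, rank + 1, 1, comm);
        }
    } while (n > 0);                      /* zero-length chunk ends it */

    fclose(fp);
    free(buf);
}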
> What you are recommending here is for Amjad to re-implement
> MPI_Bcast() in his code, which is something I would consider a very
> bad idea. The collectives are a part of MPI for a reason: it's a lot
> easier for the library to know about your machine than it is for you
> to know about it. Having users re-code parts of the MPI library
> inside their application is both a waste of programmers' time and
> likely to make the application run slower.
Isn't the use model important here? If the file is only needed for the one
run, I completely agree: do it directly in your MPI program using
collectives, along the lines of the sketch below. If persistence of the
file on the node has value, e.g., for multiple runs, I'd get the file out
to all the nodes using some existing package that implements one of the
methods Bill described. I wouldn't code one of those using MPI.
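
For the one-run case, a minimal sketch might look like the following
(the file name is hypothetical and error handling is elided). The point
is that a single MPI_Bcast lets the library pick the best algorithm for
the machine:

/* Minimal sketch: root reads the input file, one collective
 * distributes it to every rank. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    long len = 0;
    char *data = NULL;

    if (rank == 0) {                      /* only root touches the filesystem */
        FILE *f = fopen("input.dat", "rb");  /* hypothetical file name */
        fseek(f, 0, SEEK_END);
        len = ftell(f);
        rewind(f);
        data = malloc(len);
        fread(data, 1, len, f);
        fclose(f);
    }

    /* Everyone learns the size, then receives the payload.
     * (The (int) cast assumes a file under 2 GB.) */
    MPI_Bcast(&len, 1, MPI_LONG, 0, MPI_COMM_WORLD);
    if (rank != 0)
        data = malloc(len);
    MPI_Bcast(data, (int)len, MPI_BYTE, 0, MPI_COMM_WORLD);

    /* ... use data in the computation ... */

    free(data);
    MPI_Finalize();
    return 0;
}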
--
David N. Lombard, Intel, Irvine, CA
I do not speak for Intel Corporation; all comments are strictly my own.