joachim at sonne.lfbs.rwth-aachen.de
Sat May 4 01:23:29 PDT 2002
> Tony Skjellum wrote:
> > Lazy memory unlocking breaks correct programs that use memory
> > dynamically. It means that the programmer must program in a
> > restricted way with memory that is subject to send/receive.
> Why is that ? It's possible to implement lazy memory unlocking without
> imposing any constraint on the application.
You need to modify the OS for this; as I see it, MPI alone cannot do
it correctly. Or just turn off paging... How does GM ensure un-pinning
when memory is freed?
> The second problem is the egg/chicken kind of thing. The complex datatypes
> are not optimized very well in most MPI implementations. Well, nobody
> uses them because they are not optimized, and nobody will optimize them
> because nobody uses them. It's like collective communications in MPICH:
> they suck, so if you want to run efficient code, you write yourself the
> ones you need. Then, the pressure on the MPICH team to optimize the
> collectives is not important.
Basically, you are right. But:
- you didn't sleep while I was talking at CAC'02, did you? ;-)
  (see http://www.lfbs.rwth-aachen.de/users/joachim/publications/ (at the top))
- MPICH has not-so-bad collectives, if you consider that these are generic
algorithms. Further optimization needs good knowledge of the underlying
interconnect characteristics and capabilities. I have done numerous
optimizations for collectives in SCI-MPICH, e.g. for MPI_Bcast
(see http://www.lfbs.rwth-aachen.de/users/joachim/SCI-MPICH/pcast.html ).
> Do I spend time optimizing something that a tiny fraction of my users will
> effectively use or do I care about far more frequent poorly written
> applications ? It's a shame, I agree, but it's the trade-off all MPI
> implementations are playing with :-(
Well, for my thesis, that decision was obvious...