[Beowulf] How to justify the use MPI codes on multicore systems/PCs?
Sabuj Pattanayek
sabujp at gmail.com
Sat Dec 10 12:48:51 PST 2011
Mallon et al. (2009), Performance Evaluation of MPI, UPC and OpenMP
on Multicore Architectures:
http://gac.udc.es/~gltaboada/papers/mallon_pvmmpi09.pdf
A newer paper here argues for a hybrid OpenMP + MPI approach:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.190.6479
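
As a rough illustration of what that hybrid pattern looks like (my own
minimal sketch, not taken from either paper), you start one MPI process
per node or socket and let OpenMP threads cover the cores inside it:

  /* hybrid_hello.c -- minimal MPI + OpenMP sketch (illustrative only)
   * build with your MPI's compiler wrapper, e.g.:
   *   mpicc -fopenmp hybrid_hello.c -o hybrid_hello
   */
  #include <mpi.h>
  #include <omp.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int provided, rank, nprocs;

      /* FUNNELED: only the master thread will make MPI calls */
      MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

      /* OpenMP handles the cores within each MPI process */
      #pragma omp parallel
      printf("rank %d of %d, thread %d of %d\n",
             rank, nprocs, omp_get_thread_num(), omp_get_num_threads());

      MPI_Finalize();
      return 0;
  }

Run it with one process per node and OMP_NUM_THREADS set to the core
count, e.g. something like OMP_NUM_THREADS=4 mpirun -np 2 ./hybrid_hello
(exact launch flags depend on your MPI and scheduler).
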
HTH,
Sabuj
On Sat, Dec 10, 2011 at 2:21 PM, amjad ali <amjad11 at gmail.com> wrote:
> Hello All,
>
> I developed my MPI-based parallel code for clusters, but now I use it on
> multicore/manycore computers (PCs) as well. How can I justify (in a
> thesis/publication) the use of a distributed-memory code (in MPI) on a
> shared-memory (multicore) machine? I think I can give two reasons:
>
> (1) The plan is to use several hundred processes in the future, so something
> MPI-like is necessary. To maintain code uniformity and save the cost/time of
> developing a separate shared-memory version (using OpenMP, pthreads, etc.), I
> use the same MPI code on shared-memory systems (like multicore PCs). MPI-based
> codes give reasonable performance on multicore PCs, if not the best.
>
> (2) The latest MPI implementations are intelligent enough to use efficient
> mechanisms when running MPI-based codes on shared-memory (multicore)
> machines. (Please point me to a reference I can quote for this.)
>
>
> Please help me formally justify this, and comment on/modify the two
> justifications above. It would be even better if you could suggest a
> suitable publication I can cite in this regard.
>
> best regards,
> Amjad Ali
>
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit
> http://www.beowulf.org/mailman/listinfo/beowulf
>
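
Regarding your point (2): both Open MPI and MPICH2 ship shared-memory
transports for ranks that land on the same node (Open MPI's "sm" BTL,
MPICH2's Nemesis channel), so intra-node messages are copied through
shared memory instead of going through a network stack; their
documentation and the papers describing those transports would be the
things to cite. You can also see the effect yourself with a crude
ping-pong run entirely on one box (my own throwaway sketch, nothing
authoritative):

  /* pingpong.c -- crude intra-node ping-pong timing, run with exactly
   * 2 ranks on one node, e.g.:  mpirun -np 2 ./pingpong
   */
  #include <mpi.h>
  #include <stdio.h>

  #define NITER  1000
  #define NBYTES (1 << 20)          /* 1 MB messages */

  static char buf[NBYTES];

  int main(int argc, char **argv)
  {
      int rank, i;
      double t0, t1;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      MPI_Barrier(MPI_COMM_WORLD);
      t0 = MPI_Wtime();
      for (i = 0; i < NITER; i++) {
          if (rank == 0) {
              MPI_Send(buf, NBYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
              MPI_Recv(buf, NBYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                       MPI_STATUS_IGNORE);
          } else if (rank == 1) {
              MPI_Recv(buf, NBYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                       MPI_STATUS_IGNORE);
              MPI_Send(buf, NBYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
          }
      }
      t1 = MPI_Wtime();

      if (rank == 0)
          printf("avg round trip: %g us for %d byte messages\n",
                 (t1 - t0) / NITER * 1e6, NBYTES);

      MPI_Finalize();
      return 0;
  }

On Open MPI you can compare the default run (which should pick the
shared-memory BTL on-node) against forcing TCP with something like
mpirun --mca btl tcp,self -np 2 ./pingpong; the difference is usually
the easiest argument for (2) to put in a thesis.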