[Beowulf] How to justify the use MPI codes on multicore systems/PCs?
deadline at eadline.org
Sat Dec 10 14:04:43 PST 2011
Your question seems based on the assumption that shared-memory
programming is always better than message passing on shared-memory
systems. Though this seems like a safe assumption, it may not be true
in all cases:
of course it all depends on the compiler, the application, the hardware, ....
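For what it's worth, most MPI implementations already take the intra-node
shortcut for you, and you can make that explicit at launch time. A minimal
sketch, assuming an Open MPI install of this era (the `pingpong.c` file name
is just a placeholder, and the `--mca btl sm,self` selection is the
Open MPI 1.x-style way to restrict messaging to the shared-memory transport
within a node):

```shell
# Build any existing MPI code unchanged -- no OpenMP/pthreads port needed.
mpicc -O2 pingpong.c -o pingpong

# Launch 4 ranks on one multicore PC. With Open MPI 1.x, "sm" is the
# shared-memory BTL and "self" handles a rank talking to itself, so all
# message passing here goes through shared memory, not the network stack.
mpirun -np 4 --mca btl sm,self ./pingpong
```

Leaving the MCA flags out entirely usually gives the same behavior, since
the runtime picks the shared-memory path for co-located ranks by default.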
> Hello All,
> I developed my MPI-based parallel code for clusters, but now I use it on
> multicore/manycore computers (PCs) as well. How can I justify (in some
> thesis/publication) the use of a distributed-memory code (in MPI) on a
> shared-memory (multicore) machine? I can think of two reasons:
> (1) The plan is to use several hundred processes in the future, so something
> MPI-like is necessary. To keep the code uniform and save the cost/time of a
> separate shared-memory solution (using OpenMP, pthreads, etc.), I use the
> same MPI code on shared-memory systems (such as multicore PCs). MPI-based
> codes give reasonable performance on multicore PCs, if not the best.
> (2) The latest MPI implementations are intelligent enough to use
> efficient mechanisms when executing MPI-based codes on shared-memory
> (multicore) machines. (Please point me to a reference I can quote for this.)
> Please help me formally justify this and comment on/modify the two
> justifications above. Better still if you can suggest a suitable
> publication to cite in this regard.
> best regards,
> Amjad Ali