[Beowulf] Programming Help needed

Larry Stewart stewart at serissa.com
Sat Nov 7 03:11:19 PST 2009


On Fri, Nov 6, 2009 at 5:43 PM, amjad ali <amjad11 at gmail.com> wrote:

> Hi all,
>
>
> IMPORTANT: Secondly, if processor A shares 50 faces (on 50 or fewer
> elements) with another processor B, then it sends/recvs 50 different
> messages. So in general, if a processor shares X faces with any number of
> other processors, it sends/recvs that many messages. Does this approach
> have "very much reduced" performance compared with the alternative, in
> which processor A sends/recvs a single bundled message (containing the
> data for all 50 faces) to processor B? In that case a processor would
> only send/recv as many messages as it has neighbouring processors,
> sending one bundled message to each neighbour.
> Is there much of a difference between these two approaches?
>
>
It is probably faster to send a single message containing all the data rather
than fifty separate messages, especially if each item is small.  However, you
don't have to guess: just write a small test program and use MPI_Wtime to
measure how long the two cases take.
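
For example, a bare-bones harness along these lines (two ranks, with 50
doubles standing in for your face data; the counts and ranks are just
placeholders) shows the idea -- run it with mpirun -np 2:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    double buf[50] = {0};            /* stand-in for the face data */
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);     /* start both ranks together */
    t0 = MPI_Wtime();

    if (rank == 0)
        MPI_Send(buf, 50, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(buf, 50, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    t1 = MPI_Wtime();
    printf("rank %d: %g seconds\n", rank, t1 - t0);

    MPI_Finalize();
    return 0;
}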

The usual way to get timing measurements with decent resolution is to measure
the time for one iteration of each case, then for two iterations, then 4,
then 8, and so on, until the time for a run exceeds one second.
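
A sketch of that doubling scheme, assuming an exchange() routine that runs
whichever of the two messaging patterns is under test:

/* Keep doubling the iteration count until a run takes at least one
   second, then report the time per iteration. */
double time_exchange(void (*exchange)(void))
{
    long iters = 1;
    double elapsed;

    for (;;) {
        MPI_Barrier(MPI_COMM_WORLD);        /* start ranks together */
        double t0 = MPI_Wtime();
        for (long i = 0; i < iters; i++)
            exchange();                     /* the code under test */
        elapsed = MPI_Wtime() - t0;
        if (elapsed >= 1.0)
            break;
        iters *= 2;                         /* 1, 2, 4, 8, ... */
    }
    return elapsed / iters;                 /* seconds per iteration */
}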

Two things make it likely that one big message is faster than 50 small ones:
on a modern processor, copying the data into a single message buffer is much
faster than sending the bits over Ethernet, and each message carries a fixed
overhead that is probably large compared with the copy and transmission time
of a small datum.
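
Roughly, the two sending patterns look like this (NFACES and the per-face
size are made-up numbers, and the receiving rank would post matching
receives):

#include <mpi.h>
#include <string.h>

#define NFACES       50
#define FACE_DOUBLES 4      /* doubles per face -- made-up size */

/* Pattern 1: one message per shared face, so NFACES latencies. */
void send_faces_individually(double face[NFACES][FACE_DOUBLES], int dest)
{
    for (int i = 0; i < NFACES; i++)
        MPI_Send(face[i], FACE_DOUBLES, MPI_DOUBLE, dest, i,
                 MPI_COMM_WORLD);
}

/* Pattern 2: copy all faces into one buffer and send once.  The
   memcpy is cheap next to the per-message overhead. */
void send_faces_bundled(double face[NFACES][FACE_DOUBLES], int dest)
{
    double bundle[NFACES * FACE_DOUBLES];
    for (int i = 0; i < NFACES; i++)
        memcpy(&bundle[i * FACE_DOUBLES], face[i],
               FACE_DOUBLES * sizeof(double));
    MPI_Send(bundle, NFACES * FACE_DOUBLES, MPI_DOUBLE, dest, 0,
             MPI_COMM_WORLD);
}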

If you will be writing MPI programs for various problems, it might be useful
to download and run something like the Intel MPI Benchmarks (IMB).  They will
give you performance figures for the various MPI operations and a feel for
how expensive different things are on your system.
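
If memory serves, the suite builds a binary called IMB-MPI1 (the exact name
may differ by version), and a ping-pong run between two ranks is just

  mpirun -np 2 ./IMB-MPI1 PingPong

which prints latency and bandwidth over a range of message sizes.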

-L