[Beowulf] Re: Re: Home beowulf - NIC latencies
Philippe Blaise
philippe.blaise at cea.fr
Tue Feb 15 00:52:53 PST 2005
Mikhail Kuzminsky wrote:
>
> Let me ask a stupid question: which MPI implementations really allow
>
> a) overlapping MPI_Isend with computations,
> and/or b) performing a set of subsequent MPI_Isend calls faster than
> "the same" set of MPI_Send calls?
>
Dear Mikhail,
sorry if it's not a direct answer to your question, but it could help.
There is a potential difficulty when you try to overlap MPI_Isend with
some computation: generally you do it on a cluster of SMP machines, and
how well the overlap works depends a lot on the placement of the
processes on the SMP nodes.
On one hand, if the pair of processes doing the MPI_Isend / MPI_Irecv
are on the same node, you won't be able to overlap communication with
computation, but of course for large messages the communication itself
should be faster through shared memory than through the NIC.
On the other hand, if the pair of processes are on different nodes, the
communication time through the NIC is larger for large messages than the
time for the same communication through shared memory; but of course, if
your NIC (like the Quadrics one, for example) is able to do some overlap,
you will save some time.
Quadrics (again, though it may be true for other network technologies)
provides a way to use the NIC even for intra-node communication; but as
a consequence you will share the NIC between intra-node and inter-node
communications, and the potential benefit is not so clear.
So don't expect too much from overlapping communication with computation:
it is very hard to tune, and it depends a lot on the placement of your
processes on the SMP nodes, on the NIC functionalities, and on the
communication scheme you use. The basic pattern I have in mind is the
one sketched below.
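
Just to fix the idea, here is a rough toy sketch of the Isend / compute /
Wait pattern (my own example, the buffer size and the dummy computation
are arbitrary); whether any real overlap happens depends on the MPI
library and on the NIC, as discussed above:

/* toy overlap sketch: start a non-blocking transfer, compute on an
 * independent array, and only then wait for the transfer to complete */
#include <mpi.h>
#include <stdio.h>

#define N 1000000

static double buf[N], work[N];

int main(int argc, char **argv)
{
    int rank, size, i;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size >= 2 && rank < 2) {
        if (rank == 0)
            MPI_Isend(buf, N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);
        else
            MPI_Irecv(buf, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &req);

        /* independent computation, hopefully overlapped with the transfer */
        for (i = 0; i < N; i++)
            work[i] = work[i] * 2.0 + 1.0;

        /* buf must not be reused before the communication has completed */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }

    if (rank == 0)
        printf("done\n");

    MPI_Finalize();
    return 0;
}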
If you have enough time, you could also have a look at another approach:
a mixed OpenMP/MPI programming scheme, along the lines of the second
sketch below.
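
To give a flavour of the mixed scheme (again my own toy example, not
tuned): one MPI process per SMP node, OpenMP threads doing the intra-node
work through shared memory, and only the inter-node part going through
MPI and the NIC:

/* hybrid sketch: OpenMP threads for the intra-node work,
 * MPI only between the nodes */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define N 1000000

static double a[N];

int main(int argc, char **argv)
{
    int rank, i, provided;
    double local = 0.0, global = 0.0;

    /* only the master thread makes MPI calls here */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (i = 0; i < N; i++)
        a[i] = rank + 1.0;

    /* intra-node parallelism through shared memory (OpenMP) */
#pragma omp parallel for reduction(+:local)
    for (i = 0; i < N; i++)
        local += a[i];

    /* inter-node communication through MPI / the NIC */
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %g\n", global);

    MPI_Finalize();
    return 0;
}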
Regards,
Phil.