[Beowulf] MPI_reduce roundoff question.
Ashley Pittman
apittman at concurrent-thinking.com
Thu Jul 12 12:06:54 PDT 2007
On Tue, 2007-07-10 at 21:13 -0400, Stern, Michael (NIH/NIA/IRP) [E]
wrote:
> Hi,
> I'm a user of the NIH Biowulf cluster, and I'm looking for an answer
> to a question that the staff here couldn't answer. I'm aware that
> roundoff error in MPI_Reduce can vary with the number of processors,
> because the order of summation depends on the communication path. My
> question is whether the order of summation can differ among different
> calls to MPI_Reduce within the same program (with the same number of
> processors during a single run).
An interesting question. Technically the answer is yes, it can,
according to the letter of the MPI spec, although the spec also advises
implementors that the result shouldn't differ, regardless of the layout
of the processes.
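For a quick illustration of why the order matters at all (a toy example
of my own, nothing to do with MPI itself): floating-point addition is
not associative, so the grouping a reduction tree imposes can change
the rounded result.

#include <stdio.h>

/* Toy demonstration (not MPI): with these values, the two groupings
 * of the same three-term sum round differently in double precision. */
int main(void)
{
    double a = 1.0e16, b = -1.0e16, c = 1.0;

    printf("(a + b) + c = %.17g\n", (a + b) + c);  /* prints 1 */
    printf("a + (b + c) = %.17g\n", a + (b + c));  /* prints 0 */
    return 0;
}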
In practice I'd be very surprised if two calls to MPI_Reduce on the
same communicator with the same values produced different results. You
may find, however, that communicators of the same size but with a
different process layout within the same job give you slightly
different answers if shared memory optimisations have been employed;
the sketch below shows the kind of back-to-back comparison I mean.
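Here is a minimal sketch of that comparison (my own illustrative code,
with an arbitrary rank-dependent value, not anything from your
program): reduce the same local value twice on the same communicator
and compare the two global sums on the root.

#include <stdio.h>
#include <mpi.h>

/* In practice I'd expect the two sums to match, but the spec doesn't
 * strictly require it. */
int main(int argc, char **argv)
{
    int rank;
    double local, sum1 = 0.0, sum2 = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* An arbitrary rank-dependent value with a non-trivial fraction. */
    local = 1.0 / (rank + 1.0);

    MPI_Reduce(&local, &sum1, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    MPI_Reduce(&local, &sum2, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum1 = %.17g\nsum2 = %.17g\nidentical: %s\n",
               sum1, sum2, (sum1 == sum2) ? "yes" : "no");

    MPI_Finalize();
    return 0;
}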
I'm aware this has been an issue for people in the past, and I've seen
procurement contracts stating that any results obtained by the machine
must be 100% repeatable, which in effect means the answer to your
question is no, they cannot differ.
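If you need that guarantee yourself, one workaround, sketched below
with a hypothetical helper of my own (reduce_sum_repeatable is my
name, not an MPI call), is to gather everything to the root and sum in
a fixed rank order, so the result no longer depends on whatever
reduction tree or shared-memory path the library picks.

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

/* Gather every contribution to the root and add them in rank order,
 * fixing the summation order independently of the MPI library. */
static double reduce_sum_repeatable(double local, MPI_Comm comm)
{
    int rank, size, i;
    double sum = 0.0, *all = NULL;

    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    if (rank == 0)
        all = malloc(size * sizeof(double));

    MPI_Gather(&local, 1, MPI_DOUBLE, all, 1, MPI_DOUBLE, 0, comm);

    if (rank == 0) {
        for (i = 0; i < size; i++)      /* fixed left-to-right order */
            sum += all[i];
        free(all);
    }
    return sum;                         /* meaningful on rank 0 only */
}

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double sum = reduce_sum_repeatable(1.0 / (rank + 1.0), MPI_COMM_WORLD);
    if (rank == 0)
        printf("repeatable sum = %.17g\n", sum);

    MPI_Finalize();
    return 0;
}

The obvious cost is that every contribution lands on one node, so it
is slower than MPI_Reduce; for large counts you would chunk the
gather, but the principle is the same.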
Ashley,