[Beowulf] Sharing an array in an MPI program?
Bogdan Costescu
Bogdan.Costescu at iwr.uni-heidelberg.de
Tue May 29 05:37:48 PDT 2007
On Fri, 25 May 2007, Jaime Perea wrote:
> One alternative that I like and it integrates well with mpi
> is the global arrays toolkit
>
> http://www.emsl.pnl.gov/docs/global/
I disagree with the "integrates well with mpi" part of the statement.
Global Arrays works on top of a communication layer called ARMCI,
which uses the MPI implementation only for setting up and tearing
down the job (only calling MPI_Init and MPI_Finalize, from what I
remember); the communication itself is done directly via lower-level
protocols (TCP, GM, etc.). I know this because some years ago I wanted
to use Global Arrays on a SCore cluster and discovered that I had to
port ARMCI to PM (the low-level communication protocol of SCore)...
This layering sometimes creates problems due to limitations imposed by
the low-level protocols (for example, the MPI implementation would
open a GM port while ARMCI would need to open a second one, so the
per-node limit of GM ports would be reached much faster).
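To make the layering concrete, here is a minimal sketch of how a GA
program typically starts up and shuts down (a sketch only, assuming
the standard Global Arrays C API; the GA_*/NGA_*/MA_* calls come from
the GA toolkit and its MA memory allocator, not from MPI):

```c
#include <mpi.h>
#include <ga.h>        /* Global Arrays API */
#include <macdecls.h>  /* MA memory allocator, defines C_DBL etc. */

int main(int argc, char **argv)
{
    /* MPI is used here only to bootstrap the job ... */
    MPI_Init(&argc, &argv);
    GA_Initialize();
    MA_init(C_DBL, 1000000, 1000000);  /* local stack/heap for GA */

    /* From here on, GA operations go through ARMCI's own transport
       (TCP, GM, ...), not through the MPI library. */
    int dims[1] = {1000};
    int g_a = NGA_Create(C_DBL, 1, dims, "my_array", NULL);
    GA_Zero(g_a);

    GA_Destroy(g_a);
    GA_Terminate();
    MPI_Finalize();    /* ... and to tear it down */
    return 0;
}
```

This is why the two GM ports get opened in the example above: MPI_Init
opens one for the MPI library, and GA_Initialize makes ARMCI open its
own, independently.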
[ I don't want the above to sound negative towards Global Arrays or
ARMCI; my intention was only to bring into the discussion a fact that
was missing. ]
--
Bogdan Costescu
IWR - Interdisziplinaeres Zentrum fuer Wissenschaftliches Rechnen
Universitaet Heidelberg, INF 368, D-69120 Heidelberg, GERMANY
Telephone: +49 6221 54 8869, Telefax: +49 6221 54 8868
E-mail: Bogdan.Costescu at IWR.Uni-Heidelberg.De