[Beowulf] Sharing an array in an MPI program?

Eric Thibodeau kyron at neuralbs.com
Sun May 27 22:09:24 PDT 2007


You might want to take a look at the Open MPI implementation. It has all sorts of neat tricks for detecting local vs. remote memory access, and there may well be native MPI ways of optimising your memory usage. I am not speaking from experience, but I wouldn't be surprised if they had already implemented this really "nice to have" feature in their latest release.
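
A minimal sketch of the idea, assuming MPI-3 shared-memory windows (MPI_Comm_split_type plus MPI_Win_allocate_shared), a placeholder array length N, and a fill loop that stands in for however the real array gets produced; it is an illustration, not a tested program:

/* One physical copy of a read-only array per node, shared by all the
 * ranks on that node via an MPI-3 shared-memory window.               */
#include <mpi.h>

int main(int argc, char **argv)
{
    const MPI_Aint N = 1 << 20;          /* placeholder array length        */
    MPI_Comm node_comm;                  /* the ranks sharing this node     */
    MPI_Win  win;
    double  *array;
    int      node_rank;

    MPI_Init(&argc, &argv);

    /* Group the ranks that can share memory, i.e. those on the same node. */
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);
    MPI_Comm_rank(node_comm, &node_rank);

    /* Rank 0 on each node allocates the whole array; the other ranks
     * allocate zero bytes and query a pointer into rank 0's segment.      */
    MPI_Win_allocate_shared(node_rank == 0 ? N * (MPI_Aint)sizeof(double) : 0,
                            sizeof(double), MPI_INFO_NULL, node_comm,
                            &array, &win);
    if (node_rank != 0) {
        MPI_Aint qsize;
        int      qdisp;
        MPI_Win_shared_query(win, 0, &qsize, &qdisp, &array);
    }

    /* Fill the array once per node, then let every rank read it. */
    MPI_Win_lock_all(MPI_MODE_NOCHECK, win);
    if (node_rank == 0)
        for (MPI_Aint i = 0; i < N; i++)
            array[i] = (double)i;        /* placeholder initialisation      */
    MPI_Win_sync(win);                   /* flush the stores ...            */
    MPI_Barrier(node_comm);              /* ... and wait until they're done */
    MPI_Win_sync(win);

    /* ... all ranks on the node now read the single copy in `array` ...   */

    MPI_Win_unlock_all(win);
    MPI_Win_free(&win);
    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}

The same window calls exist in the Fortran bindings, where the queried base address is turned into an array pointer with C_F_POINTER. If the MPI library at hand doesn't provide these calls, an ordinary per-node shared-memory segment achieves the same thing; a second sketch further down shows that variant.
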

On Thursday 24 May 2007 at 11:05, Tahir Malas wrote:
> Hi all,
> We have an 8-node cluster of SMP nodes, each with dual quad-core
> processors, connected by InfiniBand. Each process in our parallel Fortran
> 90 program holds an identical copy of an array that is used in all parts
> of the program. As the problem size grows, this replication has become a
> memory bottleneck for us.
> If all 8 processes on a node could read from a single copy of the array
> instead of each holding its own, we would gain a significant amount of
> memory. This would be natural within a node if we were to use OpenMP, but
> I wonder whether it is somehow possible with MPI alone? Distributing the
> array among the processes is too expensive for us. We also know that
> moving to hybrid programming (MPI + OpenMP) is an option, but we are
> looking for simpler alternatives for the time being.
> Thanks,
> Tahir Malas
> Bilkent University 
> Electrical and Electronics Engineering Department
> Phone: +90 312 290 1385 
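To illustrate the question above: sharing within a node is possible with MPI alone if one rank per node places the array in an ordinary shared-memory segment and the other ranks on that node map it read-only. Below is a minimal sketch under assumptions of my own: a POSIX shm segment with the made-up name "/shared_array", a placeholder length N, and host-name comparison to find the node-local rank. Error checking is omitted, and on Linux you would link with -lrt.

/* One rank per node creates and fills a POSIX shared-memory segment;
 * the remaining ranks on that node map the same segment read-only.    */
#include <mpi.h>
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const size_t N     = 1 << 20;                /* placeholder array length */
    const size_t bytes = N * sizeof(double);
    char name[MPI_MAX_PROCESSOR_NAME];
    char (*all)[MPI_MAX_PROCESSOR_NAME];
    int  rank, size, len, node_rank = 0, fd;
    double *array;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Work out a node-local rank by comparing host names. */
    MPI_Get_processor_name(name, &len);
    all = malloc((size_t)size * MPI_MAX_PROCESSOR_NAME);
    MPI_Allgather(name, MPI_MAX_PROCESSOR_NAME, MPI_CHAR,
                  all,  MPI_MAX_PROCESSOR_NAME, MPI_CHAR, MPI_COMM_WORLD);
    for (int i = 0; i < rank; i++)
        if (strcmp(all[i], name) == 0)
            node_rank++;

    if (node_rank == 0) {
        /* One rank per node creates and fills the segment. */
        fd = shm_open("/shared_array", O_CREAT | O_RDWR, 0600);
        ftruncate(fd, (off_t)bytes);
        array = mmap(NULL, bytes, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        for (size_t i = 0; i < N; i++)
            array[i] = (double)i;                /* placeholder initialisation */
    }
    MPI_Barrier(MPI_COMM_WORLD);                 /* segment exists and is filled */
    if (node_rank != 0) {
        /* The other ranks on the node map the same segment read-only. */
        fd = shm_open("/shared_array", O_RDONLY, 0600);
        array = mmap(NULL, bytes, PROT_READ, MAP_SHARED, fd, 0);
    }

    /* ... every rank on a node now reads the single copy in `array` ... */

    MPI_Barrier(MPI_COMM_WORLD);
    munmap(array, bytes);
    if (node_rank == 0)
        shm_unlink("/shared_array");
    free(all);
    MPI_Finalize();
    return 0;
}

From Fortran 90 the same segment could be mapped through a small C helper, so the rest of the program would not need to change beyond replacing the local allocation with the shared pointer.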

-- 
Eric Thibodeau
Neural Bucket Solutions Inc.
T. (514) 736-1436
C. (514) 710-0517



