[Beowulf] parallelization problem
amjad ali
amjad11 at gmail.com
Thu Aug 13 12:15:04 PDT 2009
Hi all,

I am parallelizing a 2D CFD code written in Fortran, using Open MPI. Suppose the grid (all triangles) is partitioned among 8 processes with METIS, so each process has a different number of neighbouring processes. Suppose each process has n elements/faces whose data it needs to send to the corresponding neighbouring processes, and m elements/faces for which it needs to receive data from the corresponding neighbouring processes. The values of n and m differ from process to process. A further aim is to hide the communication behind computation. For this, each process does the following:
DO j = 1, n
   CALL MPI_ISEND(send_data, num, type, dest(j), tag, MPI_COMM_WORLD, &
                  ireq(j), ierr)
END DO

DO k = 1, m
   CALL MPI_RECV(recv_data, num, type, source(k), tag, MPI_COMM_WORLD, &
                 status, ierr)
END DO
This gets the data across correctly, but it leaks memory: RAM fills up after a few thousand iterations. What is the remedy? How should I tackle this?
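My guess is that the leak comes from the send requests: each MPI_ISEND returns a request handle that stays allocated until it is completed by a wait or test call, and the ireq(j) handles above are never waited on. Is the following the right shape for the loop? (This is only a sketch; the buffer layout, the MPI_DOUBLE_PRECISION datatype, and the fixed tag of 100 are stand-ins for my actual variables.)

   ! Hypothetical sketch only: buffer shapes, datatype, and tag are
   ! stand-ins for the actual variables in my code.
   SUBROUTINE exchange(send_data, recv_data, dest, source, n, m, num)
      USE mpi
      IMPLICIT NONE
      INTEGER, INTENT(IN) :: n, m, num
      INTEGER, INTENT(IN) :: dest(n), source(m)
      DOUBLE PRECISION, INTENT(IN)  :: send_data(num, n)
      DOUBLE PRECISION, INTENT(OUT) :: recv_data(num, m)
      INTEGER :: j, k, ierr
      INTEGER :: ireq(n)                        ! one request per ISEND
      INTEGER :: sstat(MPI_STATUS_SIZE, n)      ! WAITALL needs a status array
      INTEGER :: rstat(MPI_STATUS_SIZE)

      DO j = 1, n
         CALL MPI_ISEND(send_data(:, j), num, MPI_DOUBLE_PRECISION, dest(j), &
                        100, MPI_COMM_WORLD, ireq(j), ierr)
      END DO
      DO k = 1, m
         CALL MPI_RECV(recv_data(:, k), num, MPI_DOUBLE_PRECISION, source(k), &
                       100, MPI_COMM_WORLD, rstat, ierr)
      END DO
      ! Complete every send; without this the request handles (and any
      ! buffering inside MPI) pile up, which is what fills RAM over time.
      CALL MPI_WAITALL(n, ireq, sstat, ierr)
   END SUBROUTINE exchange

With the MPI_WAITALL at the end, every request is completed each iteration, so nothing should accumulate from pass to pass.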
In another CFD code I removed this memory-filling problem with the following (in that code, n = m):
DO j = 1, n
   CALL MPI_ISEND(send_data, num, type, dest(j), tag, MPI_COMM_WORLD, &
                  ireq(j), ierr)
END DO

CALL MPI_WAITALL(n, ireq, status, ierr)

DO k = 1, n
   CALL MPI_RECV(recv_data, num, type, source(k), tag, MPI_COMM_WORLD, &
                 status, ierr)
END DO
But this approach is not working in the current code, and the previous code did not give correct results with a large number of processes. Please suggest a solution.
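From what I have read, the usual pattern for this kind of halo exchange is to pre-post nonblocking receives before the sends, do the interior computation while the messages are in flight, and then wait on all requests together; a single MPI_WAITALL over sends and receives together also handles n /= m naturally. Is something like the following sketch what is recommended? (compute_interior and compute_boundary are hypothetical placeholders for the parts of my CFD update that do not, and do, need neighbour data.)

   ! Hypothetical sketch of a pre-posted nonblocking exchange; same
   ! placeholder names as in the previous sketch.
   SUBROUTINE exchange_overlap(send_data, recv_data, dest, source, n, m, num)
      USE mpi
      IMPLICIT NONE
      INTEGER, INTENT(IN) :: n, m, num
      INTEGER, INTENT(IN) :: dest(n), source(m)
      DOUBLE PRECISION, INTENT(IN)  :: send_data(num, n)
      DOUBLE PRECISION, INTENT(OUT) :: recv_data(num, m)
      INTEGER :: j, k, ierr
      INTEGER :: req(n + m)                     ! sends and receives together
      INTEGER :: stats(MPI_STATUS_SIZE, n + m)

      ! Post every receive first, so each incoming message already has a
      ! user buffer waiting for it.
      DO k = 1, m
         CALL MPI_IRECV(recv_data(:, k), num, MPI_DOUBLE_PRECISION, source(k), &
                        100, MPI_COMM_WORLD, req(k), ierr)
      END DO
      DO j = 1, n
         CALL MPI_ISEND(send_data(:, j), num, MPI_DOUBLE_PRECISION, dest(j), &
                        100, MPI_COMM_WORLD, req(m + j), ierr)
      END DO

      ! CALL compute_interior()   ! overlap: work needing no neighbour data

      ! One wait on everything completes (and frees) all n + m requests.
      CALL MPI_WAITALL(n + m, req, stats, ierr)

      ! CALL compute_boundary()   ! work that needs the received halo values
   END SUBROUTINE exchange_overlap

Posting the receives first should also reduce the chance of messages being parked in MPI's internal buffers instead of landing directly in recv_data.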
Thanks a lot for your kind attention.
With best regards,
Amjad Ali.