<p class="MsoNormal">Hi, all,</p>
<p class="MsoNormal"> </p>
<p class="MsoNormal">I am parallelizing a CFD 2D code in FORTRAN+OPENMPI. Suppose
that the grid (all triangles) is partitioned among 8 processes using METIS.
Each process has different number of neighboring processes. Suppose each
process has n elements/faces whose data it needs to sends to corresponding
neighboring processes, and it has m number of elements/faces on which it needs
to get data from corresponding neighboring processes. Values of n and m are
different for each process. Another aim is to hide the communication behind
computation. For this I do the following for each process:</p>
<p class="MsoNormal"> </p>
<p class="MsoNormal">DO j = 1 to n</p>
<p class="MsoNormal">CALL MPI_ISEND (send_data, num, type, dest(j), tag, <span style="">MPI_COMM_WORLD, ireq(j), ierr)</span></p>
<p class="MsoNormal"><span style="">ENDDO</span></p>
<p class="MsoNormal"><span style=""> </span></p>
<p class="MsoNormal">DO k = 1 to m</p>
<p class="MsoNormal">CALL MPI_RECV(recv_data, num, type, source(k), tag, <span style="">MPI_COMM_WORLD, status, ierr)</span></p>
<p class="MsoNormal"><span style="">ENDDO</span></p>
<p class="MsoNormal"><span style=""> </span></p>
<p class="MsoNormal"><span style=""> </span></p>
<p class="MsoNormal"><span style="">This solves my
problem. But it gives memory leakage; RAM gets filled after few thousands of
iteration. What is the solution/remedy? How should I tackle this?</span></p>
<p class="MsoNormal"><span style=""> </span></p>
<p class="MsoNormal"><span style="">In another CFD code
I removed this problem of memory-filling by following (in that code n=m) : </span></p>
<p class="MsoNormal"><span style=""> </span></p>
<p class="MsoNormal">DO j = 1 to n</p>
<p class="MsoNormal">CALL MPI_ISEND (send_data, num, type, dest(j), tag, <span style="">MPI_COMM_WORLD, ireq(j), ierr)</span></p>
<p class="MsoNormal"><span style="">ENDDO</span></p>
<p class="MsoNormal"><span style=""> </span></p>
<p class="MsoNormal"><span style="">CALL
MPI_WAITALL(n,ireq,status,ierr)</span></p>
<p class="MsoNormal"><span style=""> </span></p>
<p class="MsoNormal">DO k = 1 to n</p>
<p class="MsoNormal">CALL MPI_RECV(recv_data, num, type, source(k), tag, <span style="">MPI_COMM_WORLD, status, ierr)</span></p>
<p class="MsoNormal"><span style="">ENDDO</span></p>
<p class="MsoNormal"><span style=""> </span></p>
<p class="MsoNormal"><span style="">But this is not
working in current code; and the previous code was not giving correct results
with large number of processes.</span></p>
<p class="MsoNormal"><span style=""> </span></p>
<p class="MsoNormal"><span style="">Please suggest
solution.</span></p>
<p class="MsoNormal"><span style=""> </span></p>
<p class="MsoNormal"><span style=""> </span>THANKS A LOT FOR YOUR KIND
ATTENTION.</p>
<p class="MsoNormal"> </p>
<p class="MsoNormal">With best regards,</p>
<p class="MsoNormal">Amjad Ali.</p>