<HTML>
<HEAD>
<TITLE>Re: [Beowulf] Performance degrading</TITLE>
</HEAD>
<BODY>
<FONT FACE="Calibri, Verdana, Helvetica, Arial"><SPAN STYLE='font-size:11pt'><BR>
<BR>
<BR>
On 12/15/09 2:36 PM, "Gus Correa" <<a href="gus@ldeo.columbia.edu">gus@ldeo.columbia.edu</a>> wrote:<BR>
<BR>
</SPAN></FONT><BLOCKQUOTE><FONT FACE="Calibri, Verdana, Helvetica, Arial"><SPAN STYLE='font-size:11pt'>If you have single quad core nodes as you said,<BR>
then top shows that you are oversubscribing the cores.<BR>
There are five nwchem processes running.<BR>
</SPAN></FONT></BLOCKQUOTE><FONT FACE="Calibri, Verdana, Helvetica, Arial"><SPAN STYLE='font-size:11pt'><BR>
<BR>
It has been a very long time, but wasn&#8217;t that normal behavior for mpich under certain configurations? If I recall correctly, it had an extra process that was required by the implementation. I don&#8217;t think it returned from MPI_Init, so you&#8217;d have a bunch of processes consuming nearly a full CPU and then one that was mostly idle, doing something behind the scenes. I don&#8217;t remember if this was for mpich/p4 (with or without --with-comm=shared) or for mpich-gm.<BR>
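A quick way to check which case you&#8217;re in is to compare the process count against the core count on a node. A minimal sketch (Linux, assuming the processes really are named &#8220;nwchem&#8221; as in the top output discussed here):<BR>
<BR>

```shell
# Count cores the kernel sees vs. running nwchem processes.
cores=$(grep -c '^processor' /proc/cpuinfo)   # cores reported in /proc/cpuinfo
procs=$(pgrep -c nwchem || true)              # nwchem process count (0 if none)
echo "cores=$cores nwchem=$procs"
# Exactly one process more than cores, with one of them near-idle in top,
# would be consistent with mpich's extra helper process; several extra
# busy processes would point to genuine oversubscription.
```

<BR>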
<BR>
<BR>
<BR>
<BR>
-- <BR>
Glen L. Beane<BR>
Software Engineer<BR>
The Jackson Laboratory<BR>
Phone (207) 288-6153<BR>
<BR>
</SPAN></FONT>
</BODY>
</HTML>