[Beowulf] CPU shifts?? and time problems

amjad ali amjad11 at gmail.com
Wed Sep 2 05:14:15 PDT 2009


Hi All,
I have a 4-node Beowulf cluster (4 dual-core Xeon 3085 CPUs, 8 cores in total) running ROCKS 5
with Gigabit Ethernet. I have tested runs of a 1D CFD code on it, both serial and parallel.
Please reply to the following:

1) When I run my serial code on the dual-core head node (or the parallel code
with -np 1), it gives results in about 2 minutes. The "System Monitor"
application shows that at times CPU1 is 80+% busy while CPU2 is around 10%
busy; after a while CPU1 drops to around 10% while CPU2 rises to 80+%. These
swaps of busy-ness continue until the end of the run. Why is this so? Does
this shifting of the process between cores harm performance/speed? (A small
pinning sketch follows below.)
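
For reference, I was thinking of testing this by pinning the run to a single core
from inside the code, roughly as in the sketch below (just my understanding of
Linux's sched_setaffinity; the function name and call placement are illustrative):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

/* Pin the calling process to core 0 so the scheduler cannot migrate it.
   Intended to be called once at the start of the serial code (sketch only). */
static void pin_to_core0(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                      /* allow core 0 only */
    if (sched_setaffinity(0, sizeof(set), &set) != 0)
        perror("sched_setaffinity");
}

If the 2-minute serial time stays the same with the process pinned, I suppose
the migrations between CPU1 and CPU2 are essentially harmless.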

2) When I run my parallel code with -np 2 on the dual-core head node only, it
gives results in about 1 minute. The "System Monitor" application shows that
CPU1 and CPU2 remain 100% busy the whole time.

3) When I run my parallel code with -np 4 and -np 8 on the dual-core head node
only, it gives results in about 2 and 3.20 minutes respectively. The "System
Monitor" application shows that CPU1 and CPU2 remain 100% busy the whole time
(a core-count check follows below).
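
Just to make my assumption explicit: the head node exposes only two cores,
which I confirm with a trivial check like the one below (sysconf on Linux), so
-np 4 and -np 8 mean 2 and 4 MPI processes time-slicing each core:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Number of cores the OS currently has online; the head node reports 2. */
    long ncores = sysconf(_SC_NPROCESSORS_ONLN);
    printf("online cores: %ld\n", ncores);
    return 0;
}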

4) When I run my parallel code with -np 4 and -np 8 on the 4-node (8-core)
cluster, it gives results in about 9 (NINE) and 12 minutes respectively. In
the -np 4 case, the "System Monitor" application shows CPU usage fluctuations
somewhat like those in point 1 above (CPU1 stays dominantly busy most of the
time). Does this mean that an MPI process is shifting between
cores/CPUs/nodes? Do these shifts harm performance/speed? (A placement-check
sketch follows below.)
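
To see where each rank actually runs, I plan to add something like the
following near the top of the parallel code (a minimal sketch; the point is
only the MPI_Get_processor_name call):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(host, &len);    /* name of the node this rank is on */
    printf("rank %d running on %s\n", rank, host);

    /* ... rest of the CFD solver ... */

    MPI_Finalize();
    return 0;
}

That should at least show how mpirun has placed the 4 (or 8) processes across
the head node and the compute nodes.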

5) Why do -np 4 and -np 8 on the cluster take so much longer than -np 2 on the
head node? Obviously it is due to communication overhead, but how can I get
better performance, i.e. a shorter run time? My code is not complicated: only
2 values are sent and 2 values are received by each process after each stage
(a sketch of this exchange follows below).
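
For context, each communication stage in my code amounts to every process
swapping one value with each of its left and right neighbours, roughly as in
the simplified sketch below (the argument names and the use of MPI_Sendrecv
are illustrative; neighbours at the domain ends would be MPI_PROC_NULL):

#include <mpi.h>

/* One communication stage: exchange a single double with each neighbour.
   'left' and 'right' are the neighbouring ranks in a 1D decomposition. */
void exchange_boundary(double send_left, double send_right,
                       double *recv_left, double *recv_right,
                       int left, int right)
{
    MPI_Status st;

    /* send my left value to the left neighbour, receive from the right one */
    MPI_Sendrecv(&send_left,  1, MPI_DOUBLE, left,  0,
                 recv_right,  1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, &st);
    /* send my right value to the right neighbour, receive from the left one */
    MPI_Sendrecv(&send_right, 1, MPI_DOUBLE, right, 1,
                 recv_left,   1, MPI_DOUBLE, left,  1,
                 MPI_COMM_WORLD, &st);
}

Even though only two doubles move per stage, each exchange over Gigabit
Ethernet costs roughly the network latency (tens of microseconds), whereas the
same exchange through shared memory on the head node is much cheaper; with
little work per stage that latency may dominate. Would switching to
non-blocking MPI_Isend/MPI_Irecv and overlapping the exchange with interior
computation help here?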


Regards,
Amjad Ali.

