Dramatic slowdown for -np 2

LT V. H. Walke walke at usna.edu
Tue Feb 12 07:17:40 PST 2002


This behavior may not be surprising (depending on your hardware and
problem).  Going from one process to two incurs the additional
overhead of process creation and network communication.  For a
short-duration problem that overhead may outweigh the benefit of the
added processing power.  Continuing to add processes then contributes
more computing power with (hopefully) only small additional overhead.
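
As a rough illustration (a toy model, not a measurement from our
cluster): if the useful work takes W seconds on one process and a
parallel run pays a roughly fixed overhead O for startup and message
traffic, the wall-clock time is about W/np + O.  With W = 1 s and
O = 0.75 s, one process finishes in 1.0 s but two processes take
0.5 + 0.75 = 1.25 s, a slowdown despite doubling the compute power.
Only when W is large relative to O does adding processes pay off.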

A simple pi calculation program on our cluster gives the following timings:

np	time	time*np
1	2.425852	2.425852
2	1.218453	2.436906
3	1.219797	3.659391
4	0.926145	3.704580
5	0.744015	3.720075
6	0.624169	3.745014
7	0.536246	3.753722
8	0.469895	3.759160
9	0.417963	3.761667
10	0.376397	3.763970
11	0.342171	3.763881
12	0.313797	3.765564
13	0.289680	3.765840
14	0.269325	3.770550
15	0.251672	3.775080
16	0.236275	3.780400
17	0.224224	3.811808

Our server node is an SMP machine and gets the first two processes.
Note that adding the third process actually increases the execution
time slightly: that process lives on another node and has to be
created and communicated with across the network.  As more processes
are added, the total processor time consumed (time*np) stays roughly
constant while the execution time steadily decreases.
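
For reference, the program behind those numbers is essentially the
classic pi-by-numerical-integration example; a minimal sketch (the
interval count and output format here are illustrative, not the exact
code we ran) is:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int n = 100000000;          /* number of intervals; illustrative */
    int rank, size, i;
    double h, sum, x, mypi, pi, t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    t0 = MPI_Wtime();

    /* midpoint rule for the integral of 4/(1+x^2) on [0,1];
       each process takes every size-th interval */
    h = 1.0 / (double) n;
    sum = 0.0;
    for (i = rank; i < n; i += size) {
        x = h * ((double) i + 0.5);
        sum += 4.0 / (1.0 + x * x);
    }
    mypi = h * sum;

    /* a single reduction collects the partial sums on rank 0 */
    MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    t1 = MPI_Wtime();
    if (rank == 0)
        printf("pi ~= %.12f, time = %f s on %d processes\n",
               pi, t1 - t0, size);

    MPI_Finalize();
    return 0;
}

The only communication is the single MPI_Reduce at the end, so almost
all of the measured time is either computation or the cost of getting
the processes started.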

Depending on the size of your problem, the amount of communication
required between processors, and the characteristics of your network,
the relative magnitude and trend of the overhead will differ from the
results shown.

Good luck,
Vann 


On Tue, 2002-02-12 at 00:58, J Harrop wrote:
> This may be an MPI problem, but I'm not sure, so I'm posting it here and
> to comp.parallel.mpi.
> 
> We are developing an application on a four node Beowulf while we wait for 
> the remaining nodes to arrive.  Speed-up has been close to predicted with 
> -np 3 and 4 in a master/slave mode.  But when I run at -np 2 the speed 
> drops to approximately 1/4 of the original serial application.  (On 4 nodes 
> - that is 1 master and 3 slaves, we get about 2.5 times speed-up relative 
> to the original application.)  All runs produce the same answer.
> 
> In the MPI code we have basic SEND, RECV, BCAST and REDUCE - nothing 
> fancy.  Does anyone know if any of these or other MPI functions run into 
> problems with a one-to-one master/slave ratio?  Any other enlightenment 
> would be welcome.
> 
> Cheers,
> 
> John Harrop
> 
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org
> To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
-- 
----------------------------------------------------------------------
  Vann H. Walke                        Office: Chauvenet 341
  Computer Science Dept.               Ph:  410-293-6811
  572 Holloway Road, Stop 9F           Fax: 410-293-2686
  United States Naval Academy          email: walke at usna.edu
  Annapolis, MD 21402-5002             http://www.cs.usna.edu/~walke
----------------------------------------------------------------------



