[Beowulf] Intel MPI 2.0 mpdboot and large clusters, slow to start up, sometimes not at all

Bill Bryce bill at platform.com
Wed Oct 4 09:31:51 PDT 2006


Hi Matt, 

You pretty much diagnosed our problem correctly.  After discussing with
the customer and a few more engineers here, we found that the Python
code was very slow at starting the ring.  This seems to be a common
problem with MPD startup in other MPI implementations as well (though I
could be wrong).  We also modified recvTimeout, since the on-site
engineers suspected that would help as well.  The final fix we are
working on is starting the MPDs with the batch system instead of
relying on ssh: the customer does not want a root MPD ring, but rather
one ring per job, so the batch system will do this for us.
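
For what it's worth, the per-job ring boot we have in mind looks roughly
like the sketch below.  It assumes an LSF-style environment where
LSB_HOSTS lists the job's allocated nodes, and it follows the usual
manual MPD startup (mpd --daemon, mpdtrace -l, mpd -h/-p) rather than
mpdboot's ssh fan-out; treat the variable name and command options as
assumptions, not a recipe.

    # Rough sketch only: per-job, non-root mpd ring started from inside a
    # batch job.  LSB_HOSTS (an LSF-style host list) and the mpd/mpdtrace
    # options are assumptions based on the usual manual MPD startup, not
    # something taken from Intel's scripts.
    import os
    import subprocess

    def start_job_ring():
        hosts = os.environ.get("LSB_HOSTS", "").split()
        # Start the first mpd locally, under the job owner's uid (no root ring).
        subprocess.run(["mpd", "--daemon"], check=True)
        # "mpdtrace -l" reports the local mpd as "hostname_port (ip)".
        entry = subprocess.run(["mpdtrace", "-l"], capture_output=True,
                               text=True, check=True).stdout.split()[0]
        head, port = entry.rsplit("_", 1)
        # The batch system's own remote launcher (not ssh) should now run
        # this on every other allocated host so they join the same ring:
        join_cmd = "mpd --daemon -h %s -p %s" % (head, port)
        for h in sorted(set(hosts) - {head}):
            print("%s: %s" % (h, join_cmd))
        # At job cleanup, "mpdallexit" on the head node tears the ring down.
        return head, port

    if __name__ == "__main__":
        start_job_ring()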

Bill.


-----Original Message-----
From: M J Harvey [mailto:m.j.harvey at imperial.ac.uk] 
Sent: Wednesday, October 04, 2006 12:23 PM
To: Bill Bryce
Cc: beowulf at beowulf.org
Subject: Re: [Beowulf] Intel MPI 2.0 mpdboot and large clusters, slow to start up, sometimes not at all

Hello,

> We are going through a similar experience at one of our customer
> sites.  They are trying to run Intel MPI on more than 1,000 nodes.
> Are you experiencing problems starting the MPD ring?  We noticed it
> takes a really long time especially when the node count is large.
> It also just doesn't work sometimes.

I've had similar problems with slow and unreliable startup of the Intel 
mpd ring.  I noticed that, before spawning the individual mpds, mpdboot 
connects to each node and checks the version of the installed Python 
(the getversionpython() function in mpdboot.py).  On my cluster, at 
least, this check was very slow (not to say pointless).  Removing it 
dramatically improved startup time; now it's merely slow.
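
For anyone wondering why that check is so expensive: done serially over
ssh, it amounts to something like the sketch below (not the actual
mpdboot.py code; the hostfile path and ssh invocation are only
illustrative), and the cost grows linearly with the node count.

    # Illustrative sketch of a serial, per-node Python version check over
    # ssh; not the actual getversionpython() from mpdboot.py.
    import subprocess

    def check_python_versions(hosts, per_host_timeout=10):
        """ssh to each host in turn and record its Python version.

        Each connection pays a full ssh handshake, so even a fraction of
        a second per host turns into many minutes on 1,000+ nodes.
        """
        versions = {}
        for host in hosts:
            result = subprocess.run(
                ["ssh", host, "python", "-c",
                 "import sys; print('%d.%d' % sys.version_info[:2])"],
                capture_output=True, text=True, timeout=per_host_timeout)
            versions[host] = result.stdout.strip()
        return versions

    if __name__ == "__main__":
        # e.g. the same host list handed to mpdboot (path is illustrative)
        with open("mpd.hosts") as f:
            hosts = [line.strip() for line in f if line.strip()]
        for host, version in sorted(check_python_versions(hosts).items()):
            print(host, version)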

Also, for jobs with large process counts, it's worth increasing 
recvTimeout in mpirun from its default of 20 seconds.  This value 
governs how long mpirun waits for the secondary MPI processes to be 
spawned by the remote mpds, and the default is much too aggressive for 
large jobs started via ssh.
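
In effect recvTimeout is just a flat deadline on how long mpirun polls
for the remote ranks to check in, so something that scales with the job
size makes more sense.  The sketch below only illustrates that logic;
it is not Intel's mpirun code, and the scaling factor is a guess.

    # Illustration only (not Intel's mpirun): why a flat 20 s recvTimeout
    # is too aggressive for large ssh-started jobs, and one way to scale it.
    import time

    def wait_for_ranks(expected, count_ready, recv_timeout=20.0):
        """Poll until count_ready() reports all ranks or recv_timeout expires."""
        deadline = time.time() + recv_timeout
        while time.time() < deadline:
            if count_ready() >= expected:
                return True
            time.sleep(0.5)
        return False  # with a flat 20 s, large jobs often land here

    # A guess at a friendlier default: add 20 s for every 100 ranks.
    def scaled_timeout(nranks, base=20.0, per_block=20.0, block=100):
        return base + per_block * (nranks // block)

    if __name__ == "__main__":
        print(scaled_timeout(2000))  # 420.0 seconds for a 2,000-rank job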

Kind Regards,

Matt




