I just want to mention (not being a sysadmin professionally, at all) that
you could get exactly this result if something were assigning IP addresses
sequentially, e.g.

node1 = foo.bar.1
node2 = foo.bar.2
...
and something else had already assigned .13 to a public host, say, a
webserver that isn't listening on the port MPI uses.

I don't know anything about addressing a CPU within a multiprocessor
machine, but if each one gets its own IP address then it could choke this
way.
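
If something like that is going on, one quick sanity check from the node
where the job is launched might be a little loop that tries a TCP connect
to each sequentially numbered address on whatever port your MPI layer
uses, just to see which host refuses or times out. A rough sketch (the
192.168.1. prefix and the port number are placeholders I made up, not
anything from your setup):

#!/usr/bin/env python
# Rough sketch: walk a range of sequentially numbered addresses and try a
# TCP connect on one port, to see which hosts answer, refuse, or time out.
# The subnet prefix and port below are made-up placeholders, not values
# from the original post -- substitute your private network and whatever
# port your MPI layer actually uses.

import socket

SUBNET = "192.168.1."   # hypothetical private subnet prefix
PORT = 32768            # hypothetical port number
TIMEOUT = 5.0           # seconds before giving up on a host

for last_octet in range(1, 17):             # probe .1 through .16
    addr = SUBNET + str(last_octet)
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(TIMEOUT)
    try:
        sock.connect((addr, PORT))
        print("%-15s connected" % addr)
    except socket.timeout:
        print("%-15s timed out (firewalled or unreachable?)" % addr)
    except socket.error as exc:
        print("%-15s %s" % (addr, exc))
    finally:
        sock.close()

If the address ending in .13 is the one that times out (or resolves to the
public interface), that would point at the address assignment rather than
at mpp-dyna itself. But again, I'm only guessing from the outside here.
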
Peter

On 3/14/07, Joshua Baker-LePain <jlb17@duke.edu> wrote:
> I have a user trying to run a coupled structural thermal analysis using
> mpp-dyna (mpp971_d_7600.2.398). The underlying OS is centos-4 on x86_64
> hardware. We use our cluster largely as a COW, so all the cluster nodes
> have both public and private network interfaces. All MPI traffic is
> passed on the private network.
>
> Running a simulation via 'mpirun -np 12' works just fine. Running the
> same sim (on the same virtual machine, even, i.e. in the same 'lamboot'
> session) with -np > 12 leads to the following output:
>
> Performing Decomposition -- Phase 3          03/12/2007 11:47:53
>
> *** Error the number of solid elements 13881
> defined on the thermal generation control
> card is greater than the total number
> of solids in the model 12984
>
> *** Error the number of solid elements 13929
> defined on the thermal generation control
> card is greater than the total number
> of solids in the model 12985
> connect to address $ADDRESS: Connection timed out
> connect to address $ADDRESS: Connection timed out
>
> where $ADDRESS is the IP address of the *public* interface of the node
> on which the job was launched. Has anybody seen anything like this? Any
> ideas on why it would fail over a specific number of CPUs?
>
> Note that the failure is CPU dependent, not node-count dependent.
> I've tried on clusters made of both dual-CPU machines and quad-CPU
> machines, and in both cases it took 13 CPUs to create the failure.
> Note also that I *do* have a user writing his own MPI code, and he has
> no issues running on >12 CPUs.
>
> Thanks.
>
> --
> Joshua Baker-LePain
> Department of Biomedical Engineering
> Duke University
> _______________________________________________
> Beowulf mailing list, Beowulf@beowulf.org
> To change your subscription (digest mode or unsubscribe) visit
> http://www.beowulf.org/mailman/listinfo/beowulf