<br><br><div class="gmail_quote">2008/11/11 Alcides Simao <span dir="ltr"><<a href="mailto:alsimao@gmail.com">alsimao@gmail.com</a>></span><br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
Hello all!<br><br>I've heard that there are some motherboards, Intel ones if I recall correctly, that use the Yukon driver, which does not work well under Linux and is hence a serious problem for Beowulfing.<br>
<br></blockquote><div><br>I think this has been covered on the Beowulf list before.<br>Think seriously about getting hold of a set of separate Intel PRO/1000 network cards if you are going to run MPI over Ethernet.<br>
The Intel drivers are well developed, and the cards perform well. By all means run your cluster management and NFS storage over the on-board chipsets, but you may find it worth the extra expense to have separate NICs for the MPI traffic.<br>
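As a sketch of that split, here is one way to steer MPI point-to-point traffic onto the dedicated card, assuming Open MPI over TCP; the interface name eth1 and the application name are placeholders, not from any particular cluster:<br><pre>
# Hypothetical example: keep Open MPI's TCP traffic on the dedicated NIC
# (eth1 here), leaving the on-board interface for management and NFS.
mpirun --mca btl_tcp_if_include eth1 -np 16 ./my_mpi_app
</pre>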
<br>I agree with the point about the Marvell driver. I recall a session I had with a system at the University of Newcastle. The external connection was to their campus LAN, which was a 100 Mbps link to a Cisco switch. In our lab, the external connection ran over a gigabit Ethernet link to a Nortel switch, and we piled gigabytes of data up and down it without trouble. But connected to the slower LAN, transfers stopped after about 20 minutes, the cause being Explicit Congestion Notification (ECN) packets. I COULD have spent time updating the driver etc. etc.<br>
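For anyone hitting the same symptom, a quicker check than a driver rebuild is to see whether disabling ECN negotiation clears the stall. This assumes a Linux kernel that exposes the net.ipv4.tcp_ecn sysctl:<br><pre>
# Show the current ECN setting (1 = enabled, 0 = disabled)
sysctl net.ipv4.tcp_ecn

# Disable ECN negotiation as a workaround (run as root;
# add net.ipv4.tcp_ecn=0 to /etc/sysctl.conf to make it persistent)
sysctl -w net.ipv4.tcp_ecn=0
</pre>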
But I took the road of fitting a PCI-e Intel card and configuring it with the external IP address. That worked fine.<br>I hate to say it, but it depends on how much you value your time.<br><br></div></div><br>