[Beowulf] Bonding Ethernet cards / [was] 512 nodes Myrinet cluster Challenges
eugen at leitl.org
Thu May 11 05:13:04 PDT 2006
On Wed, May 10, 2006 at 09:11:43AM +0100, John Hearns wrote:
> This is a very common configuration for our clusters.
> Most motherboards these days come with dual on-board gigabit.
> One is used for general cluster traffic and NFS.
This is also useful for vanilla systems. I use dual NICs
in all of my racked machines. One NIC listens on world-visible
addresses (behind a software firewall, hanging off a Layer 3
Ethernet switch), while the second is connected to a dumb but cheap
GBit Ethernet switch running a private (e.g. 10.0.0.0/24) network,
serving NFS and the like. IPMI SMDC boards ride on that second NIC
as well (btw, the Sun Fire X2100, despite being only IPMI 1.5, can
do remote BIOS and grub as well as Linux boot messages over Serial
over LAN, via a syntax like
ipmitool -H 10.0.0.7 -P yourpasswordhere -e '!' -g -U Admin -I lan -v -v tsol
).
I intend to put IPMI on a different network (10.0.1.0/24, most likely)
in the future, still binding it to the second NIC.
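A minimal sketch of that two-NIC layout in Debian-style /etc/network/interfaces terms (interface names and the public address are placeholders, not taken from the post):

```shell
# eth0: world-visible, behind a software firewall, on the Layer 3 switch
auto eth0
iface eth0 inet static
    address 192.0.2.10        # placeholder public address
    netmask 255.255.255.0
    gateway 192.0.2.1

# eth1: private cluster network on the cheap GBit switch (NFS, IPMI, etc.)
auto eth1
iface eth1 inet static
    address 10.0.0.10
    netmask 255.255.255.0
```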
As my next project, I intend to try MTU 9000 with NFS (might be
time for a new switch if the old one doesn't do jumbo frames),
and to check out pvfs2 (current HA NFS via drbd has quite
awful performance, for multiple reasons).
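For the jumbo-frame experiment, the usual recipe looks roughly like the following (assuming the NICs and the switch all accept MTU 9000 end to end; the interface name, peer address, and mount point are placeholders):

```shell
# Raise the MTU on the private-network NIC; this must match on every
# host and be supported by the switch in between.
ip link set dev eth1 mtu 9000

# Verify that 9000-byte frames actually pass without fragmentation
# (8972 = 9000 - 20 bytes IP header - 8 bytes ICMP header).
ping -M do -s 8972 -c 3 10.0.0.1

# Remount NFS with large transfer sizes so reads/writes can actually
# benefit from the bigger frames.
mount -o remount,rsize=32768,wsize=32768 /mnt/nfs
```

Requires root, and every host on the private segment needs the same MTU, or you get silent blackholing of large packets.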
> the other is dedicated to MPI traffic, using a separate switch or
> stack of switches.
> (The normal MPI implementation is the low-latency SCore)
> On machines with separate service processors (for example Sun Galaxy)
> we put in Netgear 10/100 switches and cabling for a management network.
> With Serial Over LAN we don't have any need for serial console cabling
> any more.
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
ICBM: 48.07100, 11.36820 http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE