[Beowulf] Bonding Ethernet cards / [was] 512 nodes Myrinet cluster Challenges

Thomas H Dr Pierce TPierce at rohmhaas.com
Mon May 8 04:55:07 PDT 2006


Dear John, et al.,

I could bond all the nodes in my cluster (Dell 1750s and Dell 1850s with 
dual Gigabit Ethernet ports on the motherboard). However, I cannot tell 
whether I would get more bandwidth that way, or just more Ethernet packet 
resends because of out-of-order packets. 
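
From the kernel bonding documentation, my understanding is that the 
reordering depends on which mode I pick: balance-rr (mode 0) stripes 
packets across both links and can reorder a single TCP stream, while 
802.3ad (mode 4) hashes each flow onto one link, so nothing is reordered 
but no single stream exceeds one link. A minimal sketch of what I would 
try on each node (the address, interface names, and modprobe.conf path 
are just illustrative for a RHEL-style install):

    # /etc/modprobe.conf -- bonding driver setup (sketch)
    #   mode=0 (balance-rr): per-packet striping; more bandwidth for one
    #                        stream, but risks out-of-order delivery
    #   mode=4 (802.3ad):    per-flow hashing; no reordering, but needs
    #                        an LACP-capable switch
    alias bond0 bonding
    options bond0 mode=4 miimon=100

    # load the driver, bring up the bond, enslave both on-board ports
    modprobe bond0
    ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
    ifenslave bond0 eth0 eth1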

Do you get improved performance on your "bonded" cluster?

Does bonding two Ethernet cards add to the performance of an NFS Beowulf 
cluster, with the master node being the NFS server? Or is bonding just a 
method to improve availability, keeping the node reachable if one 
Ethernet card fails?
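
If the answer is only availability, I assume the mode to use would be 
active-backup (mode 1), which keeps the second port as an idle standby 
and adds no bandwidth, e.g. (same assumed file as above):

    # /etc/modprobe.conf -- failover only, no extra bandwidth
    options bond0 mode=1 miimon=100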

I cannot find any metrics on bonding or teaming Ethernet cards. 
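
Failing published numbers, I suppose I could measure it myself with iperf 
between the master and a compute node; several parallel streams should 
show whether a bond in 802.3ad mode really delivers more than one link 
(the hostname "master" below is a placeholder):

    # on the master (NFS server):
    iperf -s

    # on a compute node: 4 parallel TCP streams for 30 seconds
    iperf -c master -P 4 -t 30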


Message: 11
Date: Fri, 05 May 2006 10:08:47 +0100
From: John Hearns <john.hearns at streamline-computing.com>
Subject: Re: [Beowulf] 512 nodes Myrinet cluster Challenges
To: scheinin at crs4.it
Cc: beowulf at beowulf.org
Message-ID: <1146820127.6031.15.camel at Vigor13>
Content-Type: text/plain

On Fri, 2006-05-05 at 10:23 +0200, Alan Louis Scheinine wrote:
> Since you all are talking about IPMI, I have a question.
> The newer Tyan boards have a plug-in IPMI 2.0 that uses
> one of the two Gigabit Ethernet channels for the Ethernet
> connection to IPMI.  If I use channel bonding (trunking) of the
> two GbE channels, can I still communicate with IPMI on Ethernet?

We recently put in a cluster with bonded gigabit; however, that was done
using a separate dual-port PCI card.
On Supermicro boards, the IPMI card by default uses the same MAC address
as the eth0 port it shares. You could reconfigure this, I think:
(lan set 1 macaddr <x:x:x:x:x:x>)
Also, Supermicro have a riser card
which provides a separate network and serial port for the IPMI card.
Tyan probably have similar.
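
If that is ipmitool syntax, the full invocation would be something like
the sketch below (the channel number and the MAC are placeholders; check
which LAN channel your BMC actually uses):

    # show the BMC's current network settings on LAN channel 1
    ipmitool lan print 1

    # give the BMC its own MAC so it no longer shadows eth0
    ipmitool lan set 1 macaddr 00:30:48:aa:bb:cc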

***************************************

------
Sincerely,

   Tom Pierce
    Bldg 7/ Rm 207D - Spring House, PA
 

