[Beowulf] bonding and bandwidth

Michael T. Prinkey mprinkey at aeolusresearch.com
Tue Jun 8 09:06:09 PDT 2004


Hi Jean-Marc,

What NICs are you using? (If onboard, what motherboard?)  Are the NICs on
separate PCI buses?  Are the PCI bus(es) 32- or 64-bit?  Are they running
at 33 or 66 MHz or faster?  Have you tried using the tg3 driver instead of
bcm5700?
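
If it helps, here is a rough sketch of how I would check those things
(assuming pciutils is installed and a stock 2.4 /proc; adjust device and
file names to match your setup):

  # See which PCI bus each NIC sits on; with -vv the Status line for
  # each device should also show whether it advertises 66MHz capability.
  lspci -vv

  # See whether the two NICs end up sharing an interrupt line.
  cat /proc/interrupts

  # To try tg3 instead of bcm5700, repoint the aliases in
  # /etc/modules.conf, then unload bcm5700 and bring the interfaces
  # back up:
  alias eth1 tg3
  alias eth2 tg3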

Depending on driver and PCI bus issues, your throughput can be limited by
the PCI bus itself.  Also, I think that some of the gigabit ethernet
drivers have problems with bonding due to interrupt grouping or some such.
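
For reference, a plain 32-bit/33 MHz PCI bus peaks at about 33 MHz x 4
bytes = ~133 MB/s, i.e. roughly 1060 Mbit/s in theory and noticeably less
in practice once arbitration and protocol overhead are paid.  Two gigabit
NICs sharing one such bus therefore cannot get anywhere near 2 Gbit/s
combined; you really want 64-bit and/or 66 MHz slots, ideally on separate
buses.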

Quick googling found this:

http://www.scl.ameslab.gov/Projects/MP_Lite/dox_channel_bonding.html

It seems to be a known problem.

Mike Prinkey
Aeolus Research, Inc.


On Mon, 7 Jun 2004, Jean-Marc Larré wrote:

> Hi all,
> 
> I'm testing bonding.c with two gigabit ethernet links toward an HP 2848 
> switch. My kernel is 2.4.24 from kernel.org on Red Hat 9.0.
> My modules.conf looks like this:
> [root@node01 root]# cat /etc/modules.conf
> alias eth1 bcm5700
> alias eth2 bcm5700
> alias bond0 bonding
> options bond0 miimon=100 mode=balance-alb updelay=50000
> 
> My problem:
> I get a bandwidth of around 900 Mbit/s rather than 1800 Mbit/s with 
> netperf, iperf, or NetPIPE. Could you explain why I'm not getting 
> 1800 Mbit/s and where my problem is?
> 
> Thank you
> Sincerely.
> Jean-Marc
> 



