Ethernet Bonding Performance under kernel 2.4.0

Jonathan Earle jearle at
Tue Jan 16 11:48:31 PST 2001

Hi all,

I've a system composed of two PIII machines, each equipped with a Znyx 346Q
4-port ethernet card (tulip driver), which I'd like to connect together in a
bonded configuration.  For various reasons, we require 2.4.0 kernels on our
machines - currently we are using 2.4.0-test9.

The setup is simple:  each port on a 346Q in one machine is connected to the
corresponding port on the 346Q in the other machine via a crossover cable.

        +-------+      +-------+
        |       |------|       |
   -----| Box A |------| Box B |-----
        |       |------|       |
        |       |------|       |
        +-------+      +-------+
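
Each crossover link is configured as its own point-to-point network.  The
setup is along these lines (the addresses here are illustrative, not our
real ones):

        # On Box A - one private subnet per crossover link
        ifconfig eth0 192.168.0.1 netmask 255.255.255.0 up
        ifconfig eth1 192.168.1.1 netmask 255.255.255.0 up
        ifconfig eth2 192.168.2.1 netmask 255.255.255.0 up
        ifconfig eth3 192.168.3.1 netmask 255.255.255.0 up
        # Box B uses the .2 address on each of the same subnets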

Problem #1
Initially, after bootup, the performance of each of the four networks between
the two PCs is subpar.  Transfer rates vary from a few hundred KB/s to
perhaps a few MB/s, and transfers take appreciably long - this on a link
forced to 100TX-FD.  After a few minutes, however, things appear to settle
down, and I can achieve 11.2MB/s when transferring a large binary file via
ftp (rate as reported by ncftp).  The de4x5 driver shows the same behaviour.
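
For reference, forcing the duplex can be done roughly like this (mii-tool
shown as a sketch - the same thing can also be set via driver module
options):

        # Force the port to 100Mb/s full duplex (repeat for eth1-eth3)
        mii-tool --force=100baseTx-FD eth0
        # Check the resulting link state
        mii-tool eth0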

Problem #2
I built the bonding driver and, using a copy of ifenslave.c that I found
for kernel 2.3.50, was able to make a bonded channel.  The trouble is that
the performance was not at all what I expected.  Using a single eth port, I
achieved a throughput (FTP transfer of a large binary file) of 10.4MB/s
(11.2MB/s if set to full duplex).  Using 2 ports, the performance dropped to
about 3.5MB/s.  Adding a third port brings the throughput to about 5.2MB/s,
and adding the fourth port only takes it up to 5.75MB/s.
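
The bonding setup itself is the standard one; roughly (the bond0 address is
illustrative):

        # Load the bonding driver and bring up the master interface
        modprobe bonding
        ifconfig bond0 192.168.10.1 netmask 255.255.255.0 up
        # Enslave the four physical ports to bond0
        ifenslave bond0 eth0 eth1 eth2 eth3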

The de4x5 driver shows the same drop in performance as the tulip driver.

Using TEQL (the trivial link equalizer, following the instructions in the
Adv-Routing HOWTO) yields exactly the same measurements.
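
For completeness, the TEQL setup follows the HOWTO; approximately (the
teql0 address is illustrative):

        # Attach the teql0 equalizer as the root qdisc on each slave link
        # (the teql0 device appears once the sch_teql module is loaded)
        tc qdisc add dev eth0 root teql0
        tc qdisc add dev eth1 root teql0
        # Bring up the virtual teql0 device and give it an address
        ifconfig teql0 192.168.20.1 netmask 255.255.255.0 up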
