Channel Bonding and TEQL question

Somsak Sriprayoonsakul ssy at
Thu Feb 14 10:56:50 PST 2002

	Hello, I am building a 4-node cluster, each node with the following:
CPU: Athlon 1700+
Memory: 512MB
M/B: Asus A7V
Network Card: 2x Compex Readylink 10/100 (RTL8139 driver).
	The network switches are:
Switch: 2x Compex 10/100 Mbps SRX1216A switch
	I am trying to connect all the nodes together using Channel Bonding
and TEQL to merge the two Ethernet cards into one virtual interface. The
creation process went well, but the performance is bad: the bandwidth is
lower than with a single network card! Here are the details of what I am doing.

Platform: RedHat-7.2 with updated kernel 2.4.9-21, without any patches or
modifications.
Benchmark Tester: IPERF from
Test method: I created the bond0 device from the command line (not using
MASTER and SLAVE in network-scripts) as follows.
	On Machine #1:

	modprobe 8139too
	modprobe bonding
	ifconfig bond0 netmask up
	ifenslave bond0 eth0
	ifenslave bond0 eth1
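For reference, here is the same setup with explicit module options spelled out, as a sketch only: the address 192.168.0.1/24 and the miimon value are illustrative assumptions, not from my actual setup, and the parameters apply only if the bonding driver in this kernel accepts them.

```shell
# Load the NIC driver and the bonding module with explicit options.
# mode=0 selects round-robin striping (the default); miimon=100 polls
# link state every 100 ms so a dead slave gets skipped.
modprobe 8139too
modprobe bonding mode=0 miimon=100

# Bring up the master with an address, then enslave both NICs.
# (Address and netmask here are placeholders.)
ifconfig bond0 192.168.0.1 netmask 255.255.255.0 up
ifenslave bond0 eth0
ifenslave bond0 eth1
```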

	On Machine #2:
	I did the same thing, except the IP is

	Sorry that I did not keep the output of ifconfig, but it looked
OK. It was a little odd that eth1 had the same MAC address as eth0.
Still, things worked: I could telnet/ssh/rsh to the other end, and the
LEDs on both switches were blinking. So I ran IPERF.
	On Machine #2:
	iperf -s

	On Machine #1:
	iperf -c
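One thing I have since wondered about: round-robin striping reorders packets, and a single TCP stream reacts badly to reordering (duplicate ACKs, spurious fast retransmits), which can drag throughput below a single NIC's. A way to check whether reordering is the culprit is to run several streams in parallel and look at the aggregate (the server address below is a placeholder):

```shell
# Run 4 parallel TCP streams for 30 seconds and report the aggregate;
# -P is the number of parallel client streams, -t the test duration.
iperf -c 192.168.0.2 -P 4 -t 30
```

If the aggregate with -P 4 is well above the single-stream figure, reordering on the bonded link is the likely cause.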

	I tried it about 5 times and the average bandwidth was only about
Mbps. So I suspected the performance of the individual cards, but from
IPERF each of eth0 and eth1 on its own gets about 85-95 Mbps.
	Next, I tried TEQL, but the performance is almost the same.
Here are the commands:
	modprobe 8139too
	modprobe sch_teql
	tc qdisc add dev eth0 root teql0
	tc qdisc add dev eth1 root teql0
	ifconfig teql0 192.168.0.x netmask up
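Based on the LARTC HOWTO's description of teql, the slave links must also be up, and reverse-path filtering usually has to be relaxed or replies arriving on the "wrong" physical interface are silently dropped. A fuller sketch (the addresses and the rp_filter step are assumptions drawn from that HOWTO, not confirmed on my machines):

```shell
modprobe 8139too
modprobe sch_teql

# Attach both physical links to the teql0 trunk device.
tc qdisc add dev eth0 root teql0
tc qdisc add dev eth1 root teql0

# The slave interfaces must be up for teql0 to transmit over them.
ifconfig eth0 up
ifconfig eth1 up
ifconfig teql0 192.168.0.1 netmask 255.255.255.0 up

# Relax reverse-path filtering so replies arriving on either
# physical link are not discarded as spoofed.
echo 0 > /proc/sys/net/ipv4/conf/eth0/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/eth1/rp_filter
```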

	Note that there are some error packets (about 5) and overruns
(about 15) on both eth0 and eth1 after a heavy test. I also tested all of
this on the 2.4.7-10 kernel that ships with RedHat-7.2. What did I do wrong?
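Those error and overrun counters are worth watching while the test runs. A small sketch that pulls the RX errors and RX overruns (the "fifo" column) per NIC; the field positions assume the 2.4-era /proc/net/dev layout, and the sample line is made up for illustration:

```shell
# Extract per-interface RX errors and RX overruns from text in
# /proc/net/dev's 2.4-era format. On a live system, pipe the real
# file through it:  rx_stats < /proc/net/dev
rx_stats() {
    awk -F'[: ]+' '/eth[0-9]+:/ {
        # $2=iface  $3=rx_bytes $4=rx_packets $5=rx_errs $6=rx_drop $7=rx_fifo
        printf "%s rx_errs=%s rx_overruns=%s\n", $2, $5, $7
    }'
}

# Illustrative sample line in /proc/net/dev format (values made up):
printf ' eth0: 100 200 5 0 15 0 0 0 300 400 0 0 0 0 0 0\n' | rx_stats
# prints: eth0 rx_errs=5 rx_overruns=15
```

If these counts climb steadily during an iperf run, the NICs are losing packets under load, which would explain part of the bandwidth loss.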


More information about the Beowulf mailing list