<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META http-equiv=Content-Type content="text/html; charset=iso-8859-1">
<META content="MSHTML 6.00.2600.0" name=GENERATOR>
<STYLE></STYLE>
</HEAD>
<BODY bgColor=#ffffff>
<DIV><FONT face=Arial size=2>I have experienced problems getting channel bonding
to work using the Intel-supplied drivers (the only available drivers that I am
aware of) and the current version of the bonding module and ifenslave
utility. The failure seems to stem from an inability to set the MAC address of
the 2nd and 3rd interfaces.</FONT></DIV>
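<DIV><FONT face=Arial size=2></FONT> </DIV>
<DIV><FONT face=Arial size=2>For reference, this is roughly the setup procedure
I am following (the interface names eth1/eth2, the IP address, and the mode
setting here are illustrative; adjust them for your configuration):</FONT></DIV>
<PRE>
# Load the bonding module (mode=0 is round-robin; miimon enables link monitoring)
modprobe bonding mode=0 miimon=100

# Bring up the bond master with an address
ifconfig bond0 192.168.1.1 netmask 255.255.255.0 up

# Enslave the physical interfaces; this is the step where setting
# the MAC address of the 2nd (and 3rd) interface fails
ifenslave bond0 eth1 eth2
</PRE>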
<DIV><FONT face=Arial size=2></FONT> </DIV>
<DIV><FONT face=Arial size=2>Note: I realize the performance gain is negligible
with gigabit cards, but it is required by our setup. We are trying to
connect our main node via gigabit copper to the child nodes, which use
100 Mbit. They are connected through a 48-port 100 Mbit switch
that has a gigabit module installed. We want to increase
our performance through channel bonding, something we have already tested
and which provides a good performance increase for the minimal cost of
duplicating the network hardware. We would prefer not to have to
revert the head node from gigabit to 100 Mbit. We also hope
this will avoid some bottlenecks when many child nodes are
attempting to communicate with the main node.
<DIV><FONT face=Arial size=2></FONT> </DIV>
<DIV><FONT face=Arial size=2>Thanks in advance.</FONT></DIV>
<DIV><FONT face=Arial size=2></FONT> </DIV>
<DIV><FONT face=Arial size=2>Sincerely,</FONT></DIV>
<DIV><FONT face=Arial size=2> Gordon Gere</FONT></DIV>
<DIV><FONT face=Arial size=2> ROCIT System Admin</FONT></DIV>
<DIV><FONT face=Arial size=2> (<A
href="http://vivaldi.chem.uwo.ca/rocit/">http://vivaldi.chem.uwo.ca/rocit/</A>)</FONT></DIV></FONT></DIV></BODY></HTML>