Multiple Ethernet cards
Robert G. Brown
rgb@phy.duke.edu
Mon Oct 19 00:08:25 1998
On Sun, 18 Oct 1998 smwong@cse.cuhk.edu.hk wrote:
> Hi tulip'ers,
>
> > I'm trying to get both a 3Com (ISA) and LinkSys (PCI) card working.
> > Thanks to the previous responses, I upgraded to v0.89 of tulip.c
> > and have the LinkSys card working. Each card works independently
> > but the 3Com card is inactive with the LinkSys card inserted.
> >
> > Ethernet entries from "ifconfig -a":
> >
> > eth0 Link encap:Ethernet HWaddr 00:A0:CC:24:33:86
> > inet addr:137.132.75.154 Bcast:137.132.75.255 Mask:255.255.255.0
> > Interrupt:10 Base address:0xe800
> >
> > eth1 Link encap:Ethernet HWaddr 00:A0:24:2F:1C:5D
> > inet addr:192.168.1.1 Bcast:192.168.1.255 Mask:255.255.255.0
> > Interrupt:10 Base address:0x300
>
> Well, I don't know that sharing an interrupt between ISA & PCI is a good
> idea. You will get lots of trouble, both from the hardware perspective
> (triggering method, chipset recognition, interrupt acknowledgement) and
> from the driver code of two different Ethernet cards! Try to relocate
> either interrupt so that they are different. I don't have a LinkSys
> personally, but I've configured lots of machines at my workplace where a
> tulip Ethernet card (DEC chip) and a 3c509 coexist, and they have run
> happily for months.
>
> My 2 cents,
> Stephen.
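
To make that suggestion concrete: one way to move the 3c509 off IRQ 10 is
a module override (a hypothetical sketch -- IRQ 5 is assumed free here;
the card's stored configuration is normally changed with 3Com's DOS setup
utility, but the Linux 3c509 driver also accepts an irq= option when
loaded as a module):

    # /etc/conf.modules -- sketch, assumes IRQ 5 is free:
    alias eth0 tulip
    alias eth1 3c509
    # Override the 3c509's IRQ so it no longer collides with the
    # tulip on IRQ 10; the driver should reprogram the card's IRQ
    # line when the interface is brought up.
    options 3c509 irq=5
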
I'd go 2 cents further -- generic tulips are available for as little as
$30 (I've got two $30 tulip cards in the system I'm writing this on) and
work fine. The LinkSys is probably no more than $40. Throw the ISA card
in the trash -- PCI cards are much better in many ways, such as:
a) PCI handles IRQ and ioport assignment transparently and
automatically -- it really is "plug and play", whereas ISA cards are
always faking it (see the lilo sketch after this list).
b) PCI handles busmastering correctly -- having a
high-interrupt-density card on the ISA bus can supposedly significantly
increase your interrupt latency on the PCI bus, because the PCI bus has
to make worst-case assumptions when handling ISA interrupt requests.
(I say supposedly because I've read this but never measured it myself;
the reading was persuasive, though.)
c) Obviously, the PCI bus has far more bandwidth. In fact, the ISA bus
is usually just an attachment to the PCI bridge, so you'll be using PCI
bandwidth anyway -- just inefficiently. PCI devices have better latency
as well -- with considerably higher bandwidth one can make more generous
assumptions about the time required to fill or empty a FIFO buffer
and still be conservative.
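
To illustrate (a): with the drivers compiled into the kernel, an ISA
card typically needs an explicit hint passed at boot, while a PCI card
is found and configured with no help at all. A hypothetical lilo
fragment (the IRQ and ioport values are just the examples used above):

    # /etc/lilo.conf (fragment) -- sketch; tells the kernel where
    # the ISA 3c509 lives. The PCI tulip needs no such line.
    append="ether=5,0x300,eth1"
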
I think Don Becker has reported using as many as four tulip cards in one
machine, and I've used as many as three cards (not all tulips, though),
so you should have no real trouble getting multiple cards running,
especially if they are all the same kind (and ideally all PCI).
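
If you do end up with several identical PCI tulips, the module setup
stays simple (again a sketch: with the tulip driver loaded as a module,
it registers every card it finds, in PCI scan order):

    # /etc/conf.modules -- three identical tulip cards, hypothetical:
    alias eth0 tulip
    alias eth1 tulip
    alias eth2 tulip

Then each interface is configured as usual, e.g. "ifconfig eth2
192.168.2.1 up".
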
rgb
Robert G. Brown http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567 Fax: 919-660-2525 email: rgb@phy.duke.edu