FIX: 0.99L and timeouts
Andrew Morton
andrewm@uow.edu.au
Wed Apr 19 10:10:10 2000
Bogdan Costescu wrote:
>
> On Wed, 19 Apr 2000, Andrew Morton wrote:
>
> > > As I wrote in my previous message, there are 2 cases (or races) depending
> > > on which variable is updated/tested. In one case it's tx_full (which is
> > > the one described by you), in the other case it's cur_tx.
> >
> > True. The fix you proposed closes them both though?
>
> Yes, they were both after the enabling of the interrupts in
> boomerang_start_xmit and now are before the spinlock.
> However, I do have a problem with your current driver: in start_xmit
>
> outw(DownUnstall, ioaddr + EL3_CMD)
>
> is still before
>
> vp->cur_tx++;
> if (vp->cur_tx - vp->dirty_tx ....)
>
> while my proposal (that's how my tests were done) was to move it along
> with restore_flags/spin_unlock _after_ these lines.
> With your driver, after less than 2 minutes of high load, I get an "Unable
> to dereference pointer..." and the system locks; I moved the outw after
> these lines (and even after spin_unlock_irqrestore) and now I'm able to
> finish my test without problems. I think this is another race: if you
> unstall the downloading, the card can go on and produce another interrupt
> which needs vp->cur_tx to compute entry and prev_entry (at the
> beginning, before spin_lock_irqsave) at about the same time as
> vp->cur_tx++...
Man, you're good :-)
It's not an interrupt race. spin_lock_irqsave() disables interrupts on
the local CPU and also grabs the spinlock, so the local CPU can't take
an interrupt and any other CPU will spin on the lock on entry to the ISR
until the local CPU releases the lock.
What's happening here is that the line:
prev_entry->status &= cpu_to_le32(~TxIntrUploaded);
is writing to the tx descriptor in memory, indicating that the NIC
should NOT generate a dnComplete interrupt when it has transferred this
packet into the Tx FIFO (page 85 of 3c59x.pdf). That's because we've
asked it to generate an interrupt for the _current_ packet about twelve
lines beforehand. We only generate an interrupt for the
most-recently-queued packet.
With the DownUnstall where it is, the hardware will immediately download
some packets and raise an interrupt for the descriptor pointed to by
prev_entry, before we've had a chance to clear prev_entry's
TxIntrUploaded bit. This interrupt will be pending, but the local CPU
won't actually take it until it hits the spin_unlock_irqrestore() which
reenables local interrupts. (Another CPU may take it earlier and spin
on the spinlock in the ISR though).
I'm not sure why this should kill the machine:
The code in the ISR within 'if (status & DownComplete)' will advance
through the dirty buffers until it hits the descriptor which caused the
interrupt. This is _supposed_ to be the one written by start_xmit, but
it'll be the one before it because of your race. The next interrupt
should bring things back into sync. The clearing of TxIntrUploaded in
start_xmit will occur too late, but that shouldn't matter.
hmmm... Needs more thought. But your fix is clearly correct.
Suggest you put the DownUnstall immediately _before_ the spin_unlock() -
may as well unstall the hardware as early as possible.
I'm also wondering why we're not clearing TxIntrUploaded in prev_entry
for the other leg of the 'if' statement: where we're setting tx_full.
> > > programming and not so much time... I'll come back with performance
> > > impressions in few days.
> >
> > Thanks.
>
> I do have a first result, but it's about the same as what I got with
> 0.99L. I think that the communication is limited by the high TCP latency
> and not by the driver... But I'll continue benchmarking.
netperf is useful: www.netperf.org. It simply measures one-way TCP
traffic.
> > BTW, are you running full duplex? I'm seeing a few reports about duplex
> > negotiation problems and bad Tx performance when full duplex is forced.
> > (I suspect this may be due to config problems at the other end of the
> > wire though).
>
> Yes, I'm running full duplex. However, I had to modify both 0.99L and your
> driver to get this: set 3c905C as IS_CYCLONE|HAS_NWAY and then comment out
> one line in vortex_probe1:
Which card, precisely? (0x10B7, 0x9200)?
I would have thought that Tornado has NWAY. This probably explains a
few things.
> /* if (vp->info1 & 0x8000) */
> vp->full_duplex = 1;
>
> (this is documented in my post on vortex-bug from 24 Nov 1999). Please
> note that I didn't put much thought on this, I didn't have the docs and I
> just wanted the cards to work in full duplex (as I have 905B cards working
> correctly with the unmodified driver and I know the switch supports full
> duplex - BayNetworks 350-24T).
Good choice of vendor :-) (You'll have to peek at my mail headers for
this one).
So the EEPROM says to disable full-duplex. This is simply the default
behaviour and I assume says nothing about the card capabilities. The
user is supposed to either reprogram the EEPROM or to force the driver
to try full duplex ('insmod 3c59x options=0x207' in our case).
I'm a little reluctant to override the EEPROM for the mainstream driver.
My 905B comes up OK in 100BT full-duplex. Note that full_duplex is set
in vortex_open() if the MII block says it can be done.
> I never forced full duplex on the switch, as Mr. Becker suggested several
> times on different driver lists. IMHO, if both the card and the switch
> support standard auto-negotiation, they should do it.
mm.. Auto-neg is pretty notorious for not working right.
> > BTW2: if you're running half-duplex, what is 'ifconfig' saying about the
> > collision count? I'm seeing a _huge_ number of collisions running
> > 10baseT half duplex. Collisions are being reported at about 30% of the
> > rate of tx packets. This is something I've observed for 6 months on my
> > machines at work (905C) and I assumed it was due to poor LAN design.
>
> AFAIK, you can have large numbers of collisions in half duplex (and
> without flow-control) if you are using hubs (you didn't specify this) and
> you have intense traffic.
>
> > But it seems to be a driver problem.
>
> How did you draw this conclusion?
Well, the 3c59x driver reports a huge number of collisions for me on
hubbed 10baseT (approx 30% as many collisions as there are tx packets).
This is simply running netperf in one direction from one machine to the
other, half duplex.
Ditto on my machine at work. None of the HPUX machines on the same LAN
do this, and none of the other NICs I have at home do this. I'm still
looking into this one.
--
-akpm-
-------------------------------------------------------------------
To unsubscribe send a message body containing "unsubscribe"
to linux-vortex-bug-request@beowulf.org