FastE NIC Recommendation

Donald Becker becker at scyld.com
Mon Nov 18 07:18:42 PST 2002


On Mon, 18 Nov 2002, Steffen Persvold wrote:
> On Mon, 18 Nov 2002, Trent Piepho wrote:
> > On Mon, 18 Nov 2002, Steffen Persvold wrote:
> > > The Intel e1000 (which is in the Linux kernel tree btw.) has a ping-pong/2
> > > latency of about 25us on a decent platform (Xeon, E7500 chipset). The
> > > Broadcom 5700 series (with the tg3 driver) has approx. 30us ping-pong/2
> > > latency on a ServerWorks GC-LE chipset (also Xeon processors). _But_ if
> > > you try e1000 on an AMD 760 MPX chipset for instance, the latency increases
> > > to 65us ping-pong/2. YMMV.
> > 
> > Is that an interaction between the 760MPX and the e1000, or typical for the
> > 760MPX?  ie. does the broadcom 5700 also have high latency with AMD chipsets?
>
> Unfortunately I haven't tested the Broadcom adapters on 760 MPX yet
> (actually the only reason I've tested Broadcom at all was that it was
> onboard on our Dell 2650s). However, I've tested e1000 on a Pentium III
> w/ServerWorks HE-SL chipset, and there the latency was 30us. Maybe
> there's something with the AMD IOAPICs? From what I've seen it seems like
> most of the latency comes from IRQ processing on the receiving node.

Yes, that's where a significant part of the increased latency comes from.
Compare this _difference_ to the very low message latencies quoted for
cluster network adapters -- the numbers don't add up.

The reason is that they are comparing different latencies.  Those adapters
are quoting the time when the CPU is doing nothing but polling for the
arrival of a message, with the cache hot.  The Gigabit Ethernet latency
is measured when the system is busy doing other work and is interrupted
to process an unexpected new message.
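
For reference, here is roughly how a ping-pong/2 number gets produced -- a
minimal sketch over plain UDP sockets, not any particular benchmark; the
port, message size, and iteration count below are arbitrary choices.  One
node echoes datagrams back, the other times many round trips and halves the
average.  Whether the receiving side is sitting in recv() with a hot cache
or being interrupted in the middle of other work is exactly the distinction
above.

/*
 * Minimal UDP ping-pong/2 sketch -- an illustration, not a rigorous
 * benchmark.  Port, message size and iteration count are arbitrary.
 * Run "./pingpong server" on one node, "./pingpong client <ip>" on the
 * other; the client reports round-trip time divided by two.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>

#define PORT  5555
#define ITERS 1000

int main(int argc, char **argv)
{
    char buf[64] = { 0 };
    struct sockaddr_in addr;
    int s = socket(AF_INET, SOCK_DGRAM, 0);

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(PORT);

    if (argc >= 2 && strcmp(argv[1], "server") == 0) {
        /* Server: echo every datagram straight back to its sender. */
        struct sockaddr_in peer;
        socklen_t plen;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        bind(s, (struct sockaddr *)&addr, sizeof(addr));
        for (;;) {
            ssize_t n;
            plen = sizeof(peer);
            n = recvfrom(s, buf, sizeof(buf), 0,
                         (struct sockaddr *)&peer, &plen);
            if (n > 0)
                sendto(s, buf, n, 0, (struct sockaddr *)&peer, plen);
        }
    } else if (argc >= 3 && strcmp(argv[1], "client") == 0) {
        /* Client: time ITERS round trips, report half the average. */
        struct timeval t0, t1;
        double usec;
        int i;
        inet_pton(AF_INET, argv[2], &addr.sin_addr);
        gettimeofday(&t0, NULL);
        for (i = 0; i < ITERS; i++) {
            sendto(s, buf, sizeof(buf), 0,
                   (struct sockaddr *)&addr, sizeof(addr));
            recv(s, buf, sizeof(buf), 0);
        }
        gettimeofday(&t1, NULL);
        usec = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
        printf("ping-pong/2 latency: %.1f us\n", usec / (2.0 * ITERS));
    } else {
        fprintf(stderr, "usage: %s server | client <server-ip>\n", argv[0]);
        return 1;
    }
    close(s);
    return 0;
}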

Unlike the case of a global shared memory, low latency is not always a
good thing.  Gigabit Ethernet adapters go to some effort to _increase_
latency so that the CPU can process multiple messages during each
interrupt.  This interrupt mitigation might even be clever enough to
look at the destination of the next incoming packet to decide if the
interrupt should be deferred.
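
To make that concrete, here is a purely hypothetical sketch of such a
mitigation policy.  This is not code from e1000, tg3, or any real NIC or
driver; the names and thresholds are invented to illustrate the idea that
the interrupt is held off until enough packets or enough time accumulates,
unless peeking at the wire shows no more traffic headed our way.

#include <stdbool.h>
#include <stdio.h>

#define RX_FRAMES_THRESHOLD 10    /* fire after this many queued packets  */
#define RX_USECS_THRESHOLD  100   /* ...or once the oldest is this stale  */

struct mitigation_state {
    unsigned int pending_frames;     /* received but not yet signaled       */
    unsigned int usecs_since_first;  /* age of the oldest unsignaled packet */
};

/* Conceptually evaluated by the NIC on each received frame: raise the
 * interrupt now, or keep deferring so the CPU handles a batch per IRQ? */
static bool should_interrupt(struct mitigation_state *st,
                             bool next_packet_is_for_us)
{
    st->pending_frames++;

    /* The "clever" variant: peek at the destination of the next incoming
     * frame.  If nothing more is headed our way, signal immediately rather
     * than making the queued packets wait out the timer. */
    if (!next_packet_is_for_us)
        return true;

    /* Otherwise hold off until enough packets or enough time accumulates. */
    return st->pending_frames >= RX_FRAMES_THRESHOLD ||
           st->usecs_since_first >= RX_USECS_THRESHOLD;
}

int main(void)
{
    struct mitigation_state st = { 0, 0 };
    unsigned int t, first_arrival = 0;

    /* Toy simulation: one packet every 20us, with more always on the way. */
    for (t = 0; t <= 400; t += 20) {
        if (st.pending_frames == 0)
            first_arrival = t;
        st.usecs_since_first = t - first_arrival;
        if (should_interrupt(&st, true)) {
            printf("interrupt at t=%u us, %u packets batched\n",
                   t, st.pending_frames);
            st.pending_frames = 0;
        }
    }
    return 0;
}

Real hardware does the equivalent with receive-delay timers and
packet-count thresholds programmed into the NIC.  The trade is exactly the
one described above: per-message latency goes up, but the interrupt rate
(and therefore the CPU overhead) goes down.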

-- 
Donald Becker				becker at scyld.com
Scyld Computing Corporation		http://www.scyld.com
410 Severn Ave. Suite 210		Scyld Beowulf cluster system
Annapolis MD 21403			410-990-9993



