[Beowulf] 3c2000 recommendations?
hahn at physics.mcmaster.ca
Mon Jun 27 22:36:09 PDT 2005
> "128 KB deep packet buffer" and "Processing offloads: TCP/UDP/IP checksum".
128KB seems fairly modest by today's standards (on-chip DRAM of megabytes),
but does it really matter? afaict, PCI arbitration is bounded to around
16 us (max_lat=64), which is a window of 2K transfers * bus width (16 KB for
a 64-bit bus). so ~1 ms of buffering (128 KB at gigabit rates) might be
useful, but seems like overkill.
almost any card these days will do checksum offload, at least the ~5 I could
recognize in 126.96.36.199.
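a quick back-of-envelope check of the numbers above (assumed values, not
measurements; the link rate and MAX_LAT granularity are the standard ones,
everything else follows from them):

```python
# How long a 128 KB NIC buffer lasts at gigabit line rate, and how much
# data accumulates during a worst-case PCI arbitration stall.

LINK_RATE = 1e9 / 8          # gigabit ethernet, bytes/s (~125 MB/s)
BUFFER = 128 * 1024          # on-NIC packet buffer, bytes

buffer_time_us = BUFFER / LINK_RATE * 1e6
print(f"128 KB buffer covers ~{buffer_time_us:.0f} us of line-rate traffic")

# PCI MAX_LAT is expressed in units of 250 ns, so max_lat=64 -> 16 us
window_s = 64 * 250e-9
backlog = window_s * LINK_RATE
print(f"data arriving during a 16 us arbitration window: ~{backlog/1024:.1f} KB")
```

so the buffer covers roughly 1 ms of line-rate traffic, while a worst-case
arbitration window only needs a couple of KB — hence "overkill" above.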
> The processing offload really interests me, but I'm skeptical. The price is a
> little more than the Intel Pro/1000. I didn't find any reference to this
> adapter on this list and not much information about using this adapter in a
> cluster (MPI codes) on the net. Does anyone have any experience with this
I have a hard time believing that any reasonable nic would behave noticeably
differently from a plain one under ordinary gigabit loads. in fact, I'm a bit
surprised by how much of a difference the RDMA folk claim, but that seems to
be due to kernel bypass rather than TOE-ish features. even RDMA types don't
claim much better than about 15 us latency, which is pretty uninteresting
given Myri at 2-3 and quadrics/infinipath at 1.3 or so. (cost comparisons
need to look at whole-system configs, which makes a faster/more expensive
interconnect look good unless you manage to keep the per-node price
incredibly low...)
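to illustrate the whole-system point: what matters is the interconnect's
share of the total per-node cost, not its sticker price in isolation. all
prices below are hypothetical placeholders, not quotes:

```python
# Toy model: total cluster cost and the interconnect's share of each node.
# Prices and latencies are illustrative assumptions only.

nodes = 64
configs = {
    # name: (per-node base cost $, per-node interconnect cost $, latency us)
    "gigE + TOE nic": (1500, 150, 15.0),
    "myrinet":        (1500, 800, 2.5),
    "infinipath":     (1500, 900, 1.3),
}

for name, (base, nic, lat) in configs.items():
    total = nodes * (base + nic)
    frac = nic / (base + nic)
    print(f"{name:15s} total ${total:7d}  interconnect {frac:4.0%}"
          f" of node cost  ~{lat} us latency")
```

with cheap nodes the fast interconnect dominates the budget; with expensive
nodes it becomes a modest premium for an order of magnitude less latency.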
regards, mark hahn.