[Beowulf] 10 GbE
Greg Lindahl
lindahl at pbm.com
Wed Feb 11 17:36:37 PST 2009
On Wed, Feb 11, 2009 at 12:57:01PM +0000, Igor Kozin wrote:
> - Switch latency (btw, the data sheet says x86 inside);
Since almost all of the "latency" is in the endpoints, the best way to
measure this is to put 0, 1, and 2 switches between two nodes and
compare. If your measurements are accurate enough (look at the
dispersion), the difference between those runs shows you the
per-switch latency.
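
Something like the following rough sketch is enough to see it (this is
not from my setup; the message size and iteration count are arbitrary).
Rank 0 times small-message round trips and reports the median one-way
latency plus spread; run it with the two nodes 0, 1, and 2 switches
apart and subtract the medians:

/*
 * Minimal MPI ping-pong sketch: rank 0 times small-message round trips
 * to rank 1 and prints the median one-way latency and its spread.
 * Run with exactly 2 ranks, one per node.
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define ITERS    10000
#define MSG_SIZE 8          /* small message: latency-dominated */

static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

int main(int argc, char **argv)
{
    int rank;
    char buf[MSG_SIZE] = {0};
    static double samples[ITERS];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Barrier(MPI_COMM_WORLD);          /* start both ranks together */

    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            double t0 = MPI_Wtime();
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            samples[i] = (MPI_Wtime() - t0) / 2.0;  /* one-way estimate */
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    if (rank == 0) {
        qsort(samples, ITERS, sizeof(double), cmp_double);
        printf("one-way latency: median %.2f us (p10 %.2f, p90 %.2f)\n",
               1e6 * samples[ITERS / 2],
               1e6 * samples[ITERS / 10],
               1e6 * samples[9 * ITERS / 10]);
    }

    MPI_Finalize();
    return 0;
}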
> - Netxen NX3-20GxR card vs Intel 10 GbE AD DA card.
Endpoint latency for ethernet cards depends on a lot of things;
describing them with "latency" and "bandwidth" is perhaps even sillier
than doing so for InfiniBand. MPI programs with short, bursty
communications are not well suited to TCP offload engines, which are
aimed at reducing overhead for large transfers. For MPI, it's much
better to do what Myricom is doing. Beyond that, you may find
that dumb cards do better than offload cards, that Open-MX does better
than MPI over TCP, and that the more neighbors you're talking to, the
worse TCP offload does. I suspect this evaluation has a lot more
variables than the typical interconnect evaluation that Daresbury does.
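
For a back-of-envelope feel for why offload favors large transfers, a
simple time(m) = overhead + m/bandwidth model is enough. The numbers
below are made up purely for illustration, not measured on either of
those cards:

/*
 * Illustrative crossover: a card with higher per-message overhead only
 * wins once messages are large enough that its streaming rate matters.
 * All parameters are hypothetical.
 */
#include <stdio.h>

int main(void)
{
    /* hypothetical "dumb" card: low overhead, modest streaming rate */
    const double o_dumb = 10e-6, bw_dumb = 1.0e9;  /* 10 us, 1.0 GB/s */
    /* hypothetical offload card: higher overhead, better streaming rate */
    const double o_toe  = 25e-6, bw_toe  = 1.2e9;  /* 25 us, 1.2 GB/s */

    const double sizes[] = {64, 1024, 16384, 262144, 4194304};

    printf("%10s %12s %14s\n", "bytes", "dumb (us)", "offload (us)");
    for (int i = 0; i < 5; i++) {
        double m = sizes[i];
        printf("%10.0f %12.1f %14.1f\n", m,
               1e6 * (o_dumb + m / bw_dumb),
               1e6 * (o_toe  + m / bw_toe));
    }
    return 0;
}

With those numbers the offload card only pulls ahead somewhere around
100 KB messages; everything shorter is dominated by the per-message
overhead.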
-- greg