[Beowulf] Questions regarding interconnects
Greg Lindahl
lindahl at pathscale.com
Thu Mar 24 18:41:36 PST 2005
On Sun, Mar 20, 2005 at 07:56:35PM +0200, Olli-Pekka Lehto wrote:
> What do you see as the key differentiating factors in the quality of an
> MPI implementation? Thus far I have come up with the following:
> -Completeness of the implementation
> -Latency/bandwidth
> -Asynchronous communication
> -Smart collective communication
Those are superficial differences. What people actually want is
performance. If dumb collectives gave better performance, would you
actually care that they were dumb? What if collective performance were
dominated by (1) small-packet latency and (2) OS jitter?
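Here's the kind of back-of-envelope model I mean (the numbers and the
helper name are invented; only the shape of the model matters). A
tree-style collective pays roughly one small-packet latency per round,
and jitter adds to every round:

    #include <math.h>
    #include <stdio.h>

    /* Rough cost of a binary-tree allreduce over P ranks:
       ceil(log2(P)) rounds, each paying one small-packet latency
       plus an expected OS-jitter penalty. Inputs are made up. */
    double allreduce_estimate_us(int nprocs, double latency_us,
                                 double jitter_us)
    {
        double rounds = ceil(log2((double)nprocs));
        return rounds * (latency_us + jitter_us);
    }

    int main(void)
    {
        printf("1024 ranks: ~%.0f us\n",
               allreduce_estimate_us(1024, 5.0, 2.0));
        return 0;
    }

Note that nothing in that model cares whether the collective
algorithm is smart or dumb.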
Likewise, people want asynchronous communication because they imagine
that it will give them better performance; the feature only matters if
it actually does.
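The pattern people write when they want that overlap is a minimal
sketch like this (do_local_work is a hypothetical stand-in); whether
the transfer actually moves during the compute depends entirely on
whether the MPI library makes progress in the background:

    #include <mpi.h>

    void do_local_work(void) { /* stand-in for real computation */ }

    void exchange(double *sendbuf, double *recvbuf, int n, int peer)
    {
        MPI_Request reqs[2];
        /* Post both transfers up front... */
        MPI_Irecv(recvbuf, n, MPI_DOUBLE, peer, 0,
                  MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(sendbuf, n, MPI_DOUBLE, peer, 0,
                  MPI_COMM_WORLD, &reqs[1]);
        /* ...then compute, hoping data moves meanwhile. */
        do_local_work();
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    }

If the library only progresses inside MPI calls, all the work happens
in MPI_Waitall and the "asynchronous" code buys you nothing.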
Finally, headline latency/bandwidth is less relevant to real apps than
the latency/bandwidth at the message sizes the apps actually use. For
most interconnects, the measured performance at 2k packets isn't that
close to what you'd predict from published 0-byte latency and
infinite-size bandwidth.
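You can see why with the usual straight-line cost model, t(n) = t0 +
n/B, where t0 is the 0-byte latency and B the asymptotic bandwidth
(the numbers below are invented; MB/s conveniently equals bytes/us):

    #include <stdio.h>

    /* Effective bandwidth at message size n under t(n) = t0 + n/B. */
    double effective_bw_MBps(double n_bytes, double t0_us,
                             double b_inf_MBps)
    {
        double t_us = t0_us + n_bytes / b_inf_MBps;
        return n_bytes / t_us;
    }

    int main(void)
    {
        /* 5 us latency, 900 MB/s peak: 2k messages see ~280 MB/s. */
        printf("2 KB: %.0f MB/s\n",
               effective_bw_MBps(2048.0, 5.0, 900.0));
        return 0;
    }

And real interconnects often do worse at 2k than even that model
predicts, because per-message overheads beyond t0 kick in.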
> Are there any NICs on the market which utilize the 10GBase-CX4
> standard, and if there are, are there any clusters which use them?
You can't buy a big switch for it, so there might be small clusters,
but people don't talk about small clusters much. Orion's 96-node
clusters, if I read my tea leaves right, are hooked together using
10G-CX4 uplinks. But that's just building a 96-port 1-gig switch for
cheap.
> When do you estimate that commodity Gigabit NICs with integrated RDMA
> support will arrive to the market? (or will they?)
They arrived a while ago and didn't seem to make much of a splash. I
don't personally think much of offload.
Just one man's (likely-to-be-disputed) opinion,
-- greg