[Beowulf] Intel buys QLogic InfiniBand business
lindahl at pbm.com
Fri Jan 27 14:27:23 PST 2012
On Fri, Jan 27, 2012 at 03:19:31PM -0500, Joe Landman wrote:
> >>> That's the whole market, and QLogic says they are #1 in the FCoE
> >>> adapter segment of this market, and #2 in the overall 10 gig adapter
> >>> market (see
> >>> http://seekingalpha.com/article/303061-qlogic-s-ceo-discusses-
> >>> f2q12-results-earnings-call-transcript)
> I found that statement interesting. I've actually not known anything
> about their 10GbE products. My bad.
I'm not surprised, as this 10GbE adapter is aimed at the same part of
the market that uses fibre channel, which isn't that common in HPC. It
doesn't have the kind of TCP offload features which have been
(futilely) marketed in HPC; it's all about running the same fibre
channel software most enterprises have run for a long time, but having
the network be ethernet.
> Haven't looked much at FDR or EDR latency. Was it a huge delta (more
> than 30%) better than QDR? I've been hearing numbers like 0.8-0.9 us
> for a while, and switches are still ~150-300ns port to port.
Are you talking about the latency of one core on one system talking to
one core on another system, or the kind of latency that real MPI
programs see, running on all of the cores on a system and talking to
many other systems? I assure you that the latter is not 0.8 us for any
IB system.
> At some
> point I think you start hitting a latency floor, bounded in part by "c",
Last time I did the computation, we were 10X that floor. And, of
course, each increase in bandwidth usually makes latency worse, absent
heroic efforts of implementers to make that headline latency look good.
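That floor computation is easy to redo. A back-of-envelope sketch (the cable
length, the ~2/3 c signal speed, and the 0.85 us headline figure are all
assumptions for illustration; switch and NIC traversal are ignored, so the
actual ratio depends heavily on the assumed path):

```python
# Back-of-envelope speed-of-light latency floor vs. a headline MPI latency.
# Assumptions (not from the thread): signals propagate at roughly 2/3 c in
# cable, and we pick a 5 m cable run; real cluster paths are longer.
C = 299_792_458.0             # speed of light in vacuum, m/s
PROP_SPEED = (2.0 / 3.0) * C  # rough signal speed in copper/fiber, m/s

def propagation_floor_us(cable_m: float) -> float:
    """One-way propagation delay over cable_m metres, in microseconds."""
    return cable_m / PROP_SPEED * 1e6

floor = propagation_floor_us(5.0)  # ~0.025 us for a 5 m cable
headline = 0.85                    # midpoint of the 0.8-0.9 us figures above
print(f"floor ~{floor:.3f} us, headline {headline} us, "
      f"ratio ~{headline / floor:.0f}x")
```

With a longer real-world path (more cable plus one or two switch hops) the
ratio shrinks toward the ~10X Greg mentions, but either way the headline
numbers sit well above the physical floor.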