[Beowulf] Intel buys QLogic InfiniBand business
Gilad Shainer
Shainer at Mellanox.com
Mon Jan 30 11:22:24 PST 2012
> >> out of curiosity, has anyone set up a head-to-head comparison (two or
> >> more identical machines, both with a Qlogic and a Mellanox card of
> >> the same vintage)?
> >>
> >> There was a bit of discussion of InfiniBand benchmarking in this
> >> thread
> > and it seems it would be helpful to casual readers like myself to
> > have a few references to benchmarking toolkits and actual results.
> >
> > Most often, reported results are gathered with NetPIPE from Ames, the
> > Intel MPI Benchmarks (formerly known as the Pallas MPI Benchmarks), or
> > the OSU Micro-benchmarks.
> >
> > Searching the web turned up a recent report from the Swiss CSCS in
> > which a Mellanox ConnectX-3 QDR HCA with a Mellanox switch is set
> > against a QLogic 7300 QDR HCA connected to a QLogic switch:
> > http://www.cscs.ch/fileadmin/user_upload/customers/cscs/Tech_Reports/Performance_Analysis_IB-QDR_final-2.pdf
>
> as far as I can tell, this paper mainly says "a coalescing stack delivers
> benchmark results with much higher bandwidth and message rate than a
> non-coalescing stack." the comment on figure 8:
>
> To some extent, the environment variables mentioned before
> contribute to this outstanding result
>
> which is remarkably droll. I'm not sure how well coalescing works for real
> applications.
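For the casual readers asking for references above: at their core, NetPIPE, the Intel MPI Benchmarks, and the OSU micro-benchmarks all time a simple ping-pong between two ranks. A minimal sketch of the idea in plain MPI C (my own illustration, not the actual source of any of those suites) looks like this:

/* Minimal ping-pong latency/bandwidth sketch -- an illustration of
 * what NetPIPE/IMB/OSU measure, not their actual code.
 * Run with e.g.: mpirun -np 2 -host node1,node2 ./pingpong
 * (hostnames are placeholders for two machines with one HCA each) */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define ITERS 1000

int main(int argc, char **argv)
{
    int rank, size = 1 << 20;               /* 1 MiB message */
    char *buf;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    buf = calloc(size, 1);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            MPI_Send(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0) {
        double rtt = (t1 - t0) / ITERS;      /* average round-trip time */
        printf("half-RTT latency: %.2f us, bandwidth: %.2f MB/s\n",
               rtt / 2.0 * 1e6, 2.0 * size / rtt / 1e6);
    }
    free(buf);
    MPI_Finalize();
    return 0;
}

The real suites sweep the message size from a few bytes (latency-dominated) up to megabytes (bandwidth-dominated), which is why the reports show full curves rather than single numbers.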
First, I looked at the paper, and it includes latency and bandwidth comparisons as well, not only message rate. It is important for others to know that and not to dismiss it. Second, both companies offer message coalescing as an option; you can choose whether to use it. I have seen applications that benefit from it and applications that do not. Without coalescing, Mellanox delivers around 30M messages per second.
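Message-rate numbers like that come from a different pattern than ping-pong: each iteration posts a window of non-blocking small sends, in the spirit of osu_mbw_mr. A rough sketch (again my own illustration, with made-up window and iteration counts):

/* Message-rate sketch: a window of non-blocking 8-byte sends per
 * iteration, illustrative only -- not osu_mbw_mr itself. */
#include <mpi.h>
#include <stdio.h>

#define WINDOW 64   /* sends in flight per iteration (arbitrary choice) */
#define ITERS  10000
#define MSG    8    /* small payload: stresses message rate, not bandwidth */

int main(int argc, char **argv)
{
    int rank;
    char sbuf[MSG] = {0}, rbuf[WINDOW][MSG], ack = 0;
    MPI_Request req[WINDOW];
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            for (int w = 0; w < WINDOW; w++)
                MPI_Isend(sbuf, MSG, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &req[w]);
            MPI_Waitall(WINDOW, req, MPI_STATUSES_IGNORE);
            MPI_Recv(&ack, 1, MPI_CHAR, 1, 1, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);             /* ack closes the window */
        } else if (rank == 1) {
            for (int w = 0; w < WINDOW; w++)
                MPI_Irecv(rbuf[w], MSG, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &req[w]);
            MPI_Waitall(WINDOW, req, MPI_STATUSES_IGNORE);
            MPI_Send(&ack, 1, MPI_CHAR, 0, 1, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("message rate: %.2f M msgs/s\n",
               (double)ITERS * WINDOW / (t1 - t0) / 1e6);
    MPI_Finalize();
    return 0;
}

Whether the stack coalesces the sends inside such a window into fewer wire packets is exactly what the coalescing tunables control, which is why coalesced and non-coalesced runs can differ so much on this kind of test.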
-Gilad.