[Beowulf] Intel buys QLogic InfiniBand business

Gilad Shainer Shainer at Mellanox.com
Sat Jan 28 21:03:31 PST 2012

> >>> So I wonder why multiple OEMs decided to use Mellanox for on-board
> >>> solutions and no one used the QLogic silicon...
> >>
> >> That's a strange argument.
> >
> > It is not an argument, it is stating a fact.
> you are mistaken.  you ask a pointed question - do not construe it as a
> statement of fact.  if you wanted to state a fact, you might say:
> "multiple OEMs decided to use Mellanox and none have used Qlogic".

You probably meant to say "I think differently" rather than "you are mistaken"... Making this mailing list a little more polite would benefit us all.
> by stating this, you are implying that Mellanox is superior in some way, though
> another perfectly adequate explanation could be that Qlogic didn't offer their
> chips to OEMs, or did so at a higher price.  (in fact, the latter would suggest the
> possibility that Qlogic chips are actually worth more.)  note my use of
> subjunctive here.
> in reality, Mellanox is the easy choice - widely known and used, the default.
> OEMs are fond of making easy choices: more comfortable to a lazy customer,
> possibly lower customer support costs, etc.
> this says nothing about whether an easy choice is a superior solution to the
> customer (that is, in performance, price, etc).

OEMs don't place devices on the motherboard just because they can, or because it is cheaper. They do so because they believe it will benefit their users, and hence help them sell more. I can assure you that silicon was offered by both companies, and it wasn't an issue of price. From that point you can draw whatever conclusion you wish.

> >good validation for InfiniBand as a leading solution for any server and
> >storage connectivity.
> besides Lustre, where do you see IB used for storage?

Protocols: iSER (iSCSI over RDMA), NFS over RDMA, SRP, GPFS, SMB and others
OEMs: DDN, Xyratex, NetApp, EMC, Oracle, SGI, HP, IBM and others.
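For concreteness, here is a sketch of one of those protocols in use: mounting an NFS export over RDMA on a Linux client. This assumes an RDMA-capable fabric and an NFS server already exporting a directory; the server name and export path are placeholders, not anything from this thread.

```shell
# Load the NFS/RDMA client transport (Linux kernel module).
modprobe xprtrdma

# Mount the export over RDMA; 20049 is the IANA-registered NFS/RDMA port.
# "server" and "/export" are placeholders for your own fabric.
mount -t nfs -o rdma,port=20049 server:/export /mnt/nfs
```

The same export can be mounted over plain TCP by dropping the `rdma` option, which makes for an easy like-for-like comparison of the RDMA transport.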

> > Going into a bit more of a technical discussion... QLogic's way of networking
> >is doing everything in the CPU, and Mellanox's way is to implement it all in
> >the hardware (we all know that).
> this is a dishonest statement: you know that QLogic isn't actually trying
> to do *everything* in the CPU.

You are right: you do need HW translation from PCIe to IB. But I am sure you know where the majority of the transport, error handling, etc. is actually done...
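To make the onload/offload distinction being argued here concrete, the following is a deliberately simplified toy model in pure software. The names and numbers are hypothetical and have nothing to do with either vendor's actual silicon; it only illustrates the structural point: in an "onload" design the host CPU runs the reliability loop (retransmitting until acknowledged), while in an "offload" design that state machine lives in the adapter and the CPU posts one work request per message.

```python
import random

random.seed(42)

class LossyWire:
    """Drops a fraction of packets to force retransmission."""
    def __init__(self, loss=0.2):
        self.loss = loss
        self.delivered = []

    def send(self, seq):
        ok = random.random() >= self.loss
        if ok:
            self.delivered.append(seq)
        return ok  # stands in for an ACK

def onload_send(wire, msgs):
    """'Onload' style: the host CPU runs the retransmission loop itself,
    so it pays for every attempt, including every retry."""
    cpu_ops = 0
    for seq in msgs:
        while True:
            cpu_ops += 1          # CPU touches every attempt
            if wire.send(seq):
                break             # ACK received, next message
    return cpu_ops

class OffloadAdapter:
    """'Offload' style: the adapter's state machine retransmits;
    the CPU only posts one work request per message."""
    def __init__(self, wire):
        self.wire = wire

    def post(self, seq):
        while not self.wire.send(seq):
            pass                  # retries happen in 'hardware'

def offload_send(adapter, msgs):
    cpu_ops = 0
    for seq in msgs:
        cpu_ops += 1              # one post per message, regardless of loss
        adapter.post(seq)
    return cpu_ops

msgs = list(range(100))
w1, w2 = LossyWire(), LossyWire()
cpu_onload = onload_send(w1, msgs)
cpu_offload = offload_send(OffloadAdapter(w2), msgs)
print(cpu_onload, cpu_offload)  # onload does more CPU work under packet loss
```

Both paths deliver every message; the difference is only where the retry work is accounted. Real adapters trade off much more than this (memory registration, interrupt handling, cache effects), which is exactly why the "superset" claim is contested in this thread.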

> > The second option is a superset, therefore
> >worst case can be even performance.
> this is also dishonest: making the adapter more intelligent clearly
> introduces some tradeoffs, so it's _not_ a superset.  unless you are
> claiming that within every Mellanox adapter is _literally_ the same
> functionality, at the same performance, as is in a Qlogic adapter.

It is not dishonest. In general, offloading is a superset: you can choose to implement only offloads, or to leave room for CPU control as well. There will always be parts that are better done in HW, and if you keep flexibility for the rest, it is a superset.

> >> Maybe we could have a few less attacks, complaining and hand waving and
> >> more useful information?  IMO Greg never came across as a commercial
> >> (which beowulf list isn't an appropriate place for), but does regularly
> >> contribute useful info.  Arguing market share as proof of performance
> >> superiority is just silly.
> >
> > I am not sure about that... a quick search through past emails can show
> > amazing things...
> > I believe most of us are in agreement here. Less FUD, more facts.
> "facts" in this context (as opposed to FUD, arm-waving, etc.) must be
> dispassionate and quantifiable.  not hyperbole and suggestive rhetoric.

Maybe we read different emails.

> out of curiosity, has anyone set up a head-to-head comparison
> (two or more identical machines, both with a Qlogic and a Mellanox card of
> the same vintage)?
> regards, mark hahn.
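The head-to-head comparison asked about above is straightforward to script with standard tools. A hedged sketch, assuming the OFED `perftest` suite and the OSU micro-benchmarks are installed on two identical hosts (hostnames here are placeholders):

```shell
# On the server node, start the latency test in listen mode:
ib_write_lat

# On the client node, run RDMA write latency, then bandwidth,
# against the server (replace "server-node" with the real hostname):
ib_write_lat server-node
ib_write_bw  server-node

# MPI-level numbers with the OSU micro-benchmarks, one rank per node:
mpirun -np 2 -host nodeA,nodeB osu_latency
mpirun -np 2 -host nodeA,nodeB osu_bw
```

Running the identical commands with a QLogic and a Mellanox HCA of the same vintage in the same machines would give exactly the dispassionate, quantifiable comparison the thread is asking for.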
