[Beowulf] Intel buys QLogic InfiniBand business
Vincent Diepeveen
diep at xs4all.nl
Fri Jan 27 13:42:24 PST 2012
On Jan 27, 2012, at 9:19 PM, Joe Landman wrote:
> On 01/27/2012 03:06 PM, Vincent Diepeveen wrote:
>>
>> On Jan 27, 2012, at 8:29 PM, Håkon Bugge wrote:
>>
>>> Greg,
>>>
>>>
>>> On 23. jan. 2012, at 20.55, Greg Lindahl wrote:
>>>
>>>> On Mon, Jan 23, 2012 at 11:28:26AM -0800, Greg Lindahl wrote:
>>>>
>>>>> http://www.hpcwire.com/hpcwire/2012-01-23/intel_to_buy_qlogic_s_infiniband_business.html
>>>>
>>>> I figured out the main why:
>>>>
>>>> http://seekingalpha.com/news-article/2082171-qlogic-gains-market-share-in-both-fibre-channel-and-10gb-ethernet-adapter-markets
>>>>
>>>>> Server-class 10Gb Ethernet Adapter and LOM revenues have recently
>>>>> surpassed $100 million per quarter, and are on track for about
>>>>> fifty percent annual growth, according to Crehan Research.
>>>>
>>>> That's the whole market, and QLogic says they are #1 in the FCoE
>>>> adapter segment of this market, and #2 in the overall 10 gig
>>>> adapter market (see
>>>> http://seekingalpha.com/article/303061-qlogic-s-ceo-discusses-f2q12-results-earnings-call-transcript)
>
> I found that statement interesting. I actually knew nothing about
> their 10GbE products. My bad.
>
>>>
>>> That can explain why QLogic is selling, but not why Intel is buying.
>>>
>>> 10 years ago, Intel went _out_ of the InfiniBand market; see
>>> http://www.networkworld.com/newsletters/servers/2002/01383318.html
>>>
>>> So has the IB business evolved so incredibly well compared to what
>>> Intel expected back in 2002? I do not think so.
>>>
>>> I would guess that we will see message passing/RDMA over
>>> Thunderbolt or similar.
>
> Intel buying makes quite a bit of sense IMO. They are in 10GbE silicon
> and NICs, and being in IB silicon and HCAs gives them not only a hedge
> (10GbE, while growing rapidly, is not the only high performance network
> market, and Intel is very good at getting economies of scale going with
> its silicon ... well ... most of its silicon ... ignoring Itanium here
> ...). It's quite likely that Intel would need IB for its PetaScale
Why buy previous-generation IB in that case?

It's about the Ethernet, of course. They produce tens of millions of
CPUs each quarter and have also announced an SoC (system on chip).
The SoC market actually ships billions of units a year, so it's a
lucrative market, yet a highly competitive one. Having 10 Gigabit
Ethernet on such an SoC, with the whole package at a low price, would
give Intel a huge lead there, worth dozens of billions a year.

It's not clear to me where all their SoC plans are going, but I bet
right now they are open to any market that needs SoCs. Note that many
SoCs are dirt cheap: even in very low volume we are talking about some
tens of dollars, CPU and other connectivity included. Price is
everything there, yet I guess Intel will be offering the 'top' SoCs,
with faster CPUs and 10 GigE. Then they produce a bunch of mainboards.
Think also of the upcoming generation of consoles, the iPad 3, and
similar products. It's not yet clear which company gets the contracts
for the upcoming consoles; it's all wide open for now. Yet they might
sell 100+ million of those, and Intel is now an attractive company for
console manufacturers to do business with.

IBM's Cell has rather lost momentum there and, as it seems, has
nothing new to offer that really outperforms. The power usage of Cell
was also disappointing: the initial version of the PS3 drew 220 watts
on average, and at 100% usage it could go up to 380+ watts. Try
putting that on your couch. Don't confuse this with the later
number-crunching Cell version, a much improved chip that was used in
some supercomputers. Yet if I remember well, some reports (was it Aad
van der Steen?) already predicted it would not be interesting for
upcoming supercomputers, as it is a kind of hybrid chip with no
long-term future. He was right.
> plans. Someone here postulated putting the silicon on the CPU. Not
> sure if this would happen, but I could see it on an IOH, easily. That
> would make sense (at least in terms of the Westmere designs ... for
> the Romley et al. I am not sure where it would make most sense).
>
> But Intel sees the HPC market growth, and I think they realize that
> there are interesting opportunities for them there with tighter high
> performance networking interconnects (Thunderbolt, USB3, IB, 10GbE
> native on all these systems).
>
Undoubtedly they'll try something in the HPC market. If you have
already put lots of cash into developing a product, it's better to put
it on the market. Based on their name they'll sell some, and some
years from now they should have something improved big-time.
Yet realize how complicated it is to tape out a GPU on a new process
technology if you aren't sure you are going to sell 100+ million of
them. Such massive projects have to pay back for the factories; a
product that doesn't even have the potential to sell more than a few
dozen billion dollars' worth is not even interesting to develop. Just
the startup costs for a GPU on a new process technology run to some
dozens of millions for each run, and the more complex the chip and the
newer the process technology, the more expensive it gets.

Realize that IBM produces its POWER7 and the upcoming BlueGene/Q CPU
on 45 nm technology, while GPUs are now releasing at 28 nm. That
theoretically gives an advantage of a tad less than
(45 / 28)^2 = 2.58. So a GPU from Intel, at the same process
technology, would need to be a factor 2.58 better than today's GPUs
from AMD (28 nm already released) and Nvidia (28 nm coming soon, I'd
expect). Whereas with CPUs, Intel's big advantage has always been that
it gets newer process technologies working sooner than the competition
does. Ivy Bridge will be 22 nm, so I've heard rumoured.
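As a quick back-of-the-envelope illustration (a minimal Python sketch;
it assumes ideal area scaling, which real designs never fully reach):

    # Ideal transistor-density advantage of one process node over
    # another: linear dimensions shrink by the ratio of the feature
    # sizes, so area (and hence density) scales with the square of
    # that ratio.
    def density_advantage(old_nm, new_nm):
        return (old_nm / new_nm) ** 2

    print(density_advantage(45, 28))  # 45 nm vs 28 nm GPUs: ~2.58x
    print(density_advantage(28, 22))  # 28 nm vs 22 nm Ivy Bridge: ~1.62x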
>> Qlogic offers QDR.
>> Mellanox is a generation newer there, with FDR.
>>
>> Both in latency and in bandwidth, a huge difference.
>
> Haven't looked much at FDR or EDR latency. Was it a huge delta (more
> than 30%) better than QDR? I've been hearing numbers like 0.8-0.9 us
> for a while, and switches are still ~150-300ns port to port. At some
A posting here from Gilad Shainer some months ago said it's 0.85 us
RDMA latency for FDR versus 1.3 us or so for the other; more important
for clusters, though, is the bandwidth. I guess PCIe 3.0 simply allows
much higher speeds, whereas the QDR cards are PCIe 2.0 parts. Isn't
PCIe 3.0 about 2x the bandwidth of PCIe 2.0?
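Roughly, yes. A quick sketch (Python; the signaling rates and
encodings are the published ones, the rest is ideal-case arithmetic
that ignores protocol overhead):

    # Effective per-lane bandwidth in GB/s: signaling rate (GT/s)
    # times encoding efficiency, divided by 8 bits per byte.
    def lane_gb_s(gt_s, payload_bits, coded_bits):
        return gt_s * payload_bits / coded_bits / 8

    pcie2 = lane_gb_s(5.0, 8, 10)     # 8b/10b encoding -> 0.50 GB/s
    pcie3 = lane_gb_s(8.0, 128, 130)  # 128b/130b       -> ~0.985 GB/s

    print(8 * pcie2, 8 * pcie3)       # x8 slot: 4.0 vs ~7.9 GB/s
    print(pcie3 / pcie2)              # ~1.97x per lane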
Now I might be happy with that last bit, but I guess that for big
FFTs, or matrices for that matter, you still need massive bandwidth.
Even if n is big in O(k * n log n), you still need k passes of
n log n work each: for matrices k is a tad bigger than n, and in
number theory k is usually around the number of bits, so about 3.32
times the number of decimal digits. That's massive bandwidth.
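To put rough numbers on it (a Python sketch; n and the two k factors
are just the ballpark figures from the paragraph above, purely
illustrative):

    import math

    # Total operation count for k passes of an n log n transform.
    def total_ops(k, n):
        return k * n * math.log2(n)

    n = 10**6                      # a million elements/digits
    print(total_ops(n, n))         # matrices, k ~ n:         ~2.0e13
    print(total_ops(3.32 * n, n))  # number theory, k ~ bits: ~6.6e13

Even at a byte or two of traffic per operation, that adds up to tens
of terabytes moved, which is the point about bandwidth.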
> point I think you start hitting a latency floor, bounded in part by
> "c",
> but also by an optimal technology path length that you can't shorten
> without significant investment and new technology. Not sure how close
> we are to that point (maybe someone from Qlogic/Mellanox could comment
> on the headroom we have).
There is a lot of headroom for better latencies from the software
viewpoint, as CPUs keep getting faster, yet the latency of networks
from years ago was only marginally worse than what's there now. On the
hardware side I'm really no expert.
>
> Bandwidth wise, you need E5 with PCIe 3 to really take advantage of
> FDR. So again, it's a natural fit, especially if it's LOM ....
>
All the socket 2011 boards in the shops now are PCIe 3.0, and a wave
of mainboards with 2 sockets will be released a few days before, or on
the same day, that Intel finally releases the Xeon version of Sandy
Bridge. It seems it hasn't been released yet because it isn't clocked
too high, if I look at this sample CPU :) 2 GHz, to be precise (an
8-core Xeon).
> Curiously, I think this suggests that ScaleMP could be in play on the
> software side ... imagine stringing together bunches of the LOM FDR/
> QDR
> motherboards with E5's and lots of ram into huge vSMPs (another
> thread).
> Shai may tell me I'm full of it (hope he doesn't), but I think
> this is
> a real possibility. The Qlogic purchase likely makes this even more
> interesting for Intel (or Cisco, others as a defensive acq).
>
A technology that has sold only some 300 machines is not an
interesting market for Intel. They have very expensive factories that
each cost many billions of dollars; these need to produce nonstop and
sell products, to pay back for the factories and to make a profit.
Intel used to be worth over 100 billion dollars on the NASDAQ.

Wasting your most clever engineers, of which every company always has
too few, on products that can't keep your factories busy is a total
waste of time. So it's your huge base of B-class engineers (let me not
quote some mailing list names) that you then move to QLogic for the
HPC. That's enough to keep it afloat for a while, in combination with
'Intel inside'.

Intel's profit is too huge for it to be busy toying with tiny markets
with a handful of customers, the majority of whom forgot to take their
medicine when you propose rewriting the software for some new hardware
platform you are going to roll out. A habit Intel is not exactly
excited about, of course, as they like to sell new technology each
time. Also, each Larrabee Intel would sell means they sell a bunch of
Xeons less, of course.
> We sure do live in interesting times!
>
Not for everyone, I guess. Many lost their jobs, and as I predicted
some years ago, a guy with a Nobel Prize might be carpet-bombing a
huge nation this summer. Intel has three huge factories in Israel,
last time I checked. That sure can give unpredictable results for the
future.
> --
> Joseph Landman, Ph.D
> Founder and CEO
> Scalable Informatics Inc.
> email: landman at scalableinformatics.com
> web : http://scalableinformatics.com
> http://scalableinformatics.com/sicluster
> phone: +1 734 786 8423 x121
> fax : +1 866 888 3112
> cell : +1 734 612 4615
>
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin
> Computing
> To change your subscription (digest mode or unsubscribe) visit
> http://www.beowulf.org/mailman/listinfo/beowulf
>