[Beowulf] Intel pulls networking onto Xeon Phi
atchley tds.net
atchley at tds.net
Mon Dec 2 05:41:26 PST 2013
On Mon, Dec 2, 2013 at 8:37 AM, atchley tds.net <atchley at tds.net> wrote:
> I found this vague:
>
> "The adapters could even borrow ideas from the Aries interconnect to give
> it some extra goodies not found in standard Ethernet or InfiniBand
> controllers."
>
> I am not sure what Aries currently offers that IB does not.
>
> The issues with Ethernet in HPC are:
>
> 1. lack of a standard kernel-bypass interface
> 2. minimum packet size is too large
> 3. topology discovery protocols
> 4. lack of multi-pathing
>
> Ethernet got a bad rap for HPC because it was usually run as TCP/IP over
> Ethernet, and because of the lack of low-latency switches. As Myricom showed
> with MX over Ethernet, followed by Mellanox with RoCE, you can get low
> latency over Ethernet by bypassing the kernel and the TCP stack. Low-latency
> switches from Arista, Gnodal, etc. help as well.
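>
> To make "kernel bypass" concrete, here is a minimal sketch using the verbs
> API (the interface RoCE exposes); it assumes a connected queue pair 'qp'
> and a registered memory region 'mr' already exist, and it only shows the
> fast path, which never enters the kernel or the TCP stack:
>
>     /* Hedged sketch: post a small send via libibverbs (as used by RoCE). */
>     #include <infiniband/verbs.h>
>     #include <stdint.h>
>
>     static int post_small_send(struct ibv_qp *qp, struct ibv_mr *mr,
>                                void *buf, uint32_t len)
>     {
>         struct ibv_sge sge = {
>             .addr   = (uintptr_t)buf,   /* payload, e.g. an 8-byte atomic */
>             .length = len,
>             .lkey   = mr->lkey,         /* key from memory registration   */
>         };
>         struct ibv_send_wr wr = {
>             .sg_list    = &sge,
>             .num_sge    = 1,
>             .opcode     = IBV_WR_SEND,
>             .send_flags = IBV_SEND_SIGNALED,
>         };
>         struct ibv_send_wr *bad_wr = NULL;
>
>         /* Writes to memory-mapped NIC queues; no syscall on this path. */
>         return ibv_post_send(qp, &wr, &bad_wr);
>     }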
>
> HPC sends a lot of small messages, and various stacks are making use of
> 8-byte atomics. It is unhelpful to have a 64-byte minimum frame size in
> this case.
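>
> As a back-of-the-envelope illustration (standard Ethernet framing numbers,
> nothing vendor-specific): an 8-byte payload in a minimum-size frame occupies
> 64 bytes of frame plus 8 bytes of preamble/SFD plus a 12-byte inter-frame
> gap, so roughly 9.5% of the wire carries useful data:
>
>     /* Wire efficiency of an 8-byte atomic on Ethernet (illustrative). */
>     #include <stdio.h>
>
>     int main(void)
>     {
>         const double payload = 8.0;               /* one 8-byte atomic    */
>         const double wire    = 64.0 + 8.0 + 12.0; /* 84 bytes on the wire */
>         printf("efficiency = %.1f%%\n", 100.0 * payload / wire);
>         return 0;
>     }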
>
> Ethernet topology discovery protocols were designed for environments where
> equipment can be changed out, expanded, or otherwise altered. They are
> meant to be decentralized and plug-and-play. HPC environments, especially
> supercomputers, are static and can benefit from centralized management.
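>
> With a static fabric, a central manager can see the whole topology and
> precompute every forwarding table once, roughly what an InfiniBand subnet
> manager does. A minimal sketch of that idea (the adjacency matrix and the
> fabric size are made up for illustration):
>
>     /* Centralized route computation for a small, static fabric. */
>     #include <string.h>
>
>     #define N 64   /* illustrative fabric size */
>
>     /* BFS from 'dst': next_hop[v] is v's neighbour on a shortest
>      * path toward dst, or -1 if unreachable. */
>     void compute_next_hops(const int adj[N][N], int dst, int next_hop[N])
>     {
>         int queue[N], head = 0, tail = 0;
>         memset(next_hop, -1, N * sizeof(int));
>         next_hop[dst] = dst;
>         queue[tail++] = dst;
>         while (head < tail) {
>             int u = queue[head++];
>             for (int v = 0; v < N; v++)
>                 if (adj[u][v] && next_hop[v] == -1) {
>                     next_hop[v] = u;        /* forward toward dst via u */
>                     queue[tail++] = v;
>                 }
>         }
>     }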
>
> Ethernet re
>
<fat fingered send>
Ethernet allows only a single active path between any two endpoints (spanning
tree blocks the alternatives). Future HPC networks will not be "non-blocking"
(i.e. not full Clos or fat-tree) due to cost. They will be oversubscribed and
they will have bottlenecks. Papers on alternate topologies such as Dragonfly
describe the need for alternate, albeit non-shortest-path, routes to avoid
congested paths.
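For what it is worth, the adaptive-routing idea in those papers (e.g. UGAL in
the Dragonfly work) boils down to something like the sketch below; the
threshold and the queue depths are illustrative, not from any particular
implementation:

    /* UGAL-style choice: take the minimal path unless it is congested enough
     * that a non-minimal (Valiant) path, roughly twice as long, is the better
     * bet. Queue depths would come from switch state; here they are params. */
    enum { UGAL_THRESHOLD = 4 };   /* illustrative bias toward minimal */

    /* Returns nonzero if the minimal path should be taken. */
    int choose_minimal(int q_minimal, int q_nonminimal)
    {
        /* Weight by path length: the non-minimal route costs ~2x the hops,
         * so it only wins when the minimal path is badly backed up. */
        return q_minimal <= 2 * q_nonminimal + UGAL_THRESHOLD;
    }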
There may be other issues, but they will need to be addressed.
Scott
> On Mon, Dec 2, 2013 at 5:05 AM, John Hearns <hearnsj at googlemail.com> wrote:
>
>>
>> http://www.enterprisetech.com/2013/11/25/intel-pull-networking-xeon-xeon-phi-chips/
>>
>> I guess most of you are familiar with these roadmaps.
>> A very good article anyway, especially the second half.
>> Exciting stuff about integrated networking right onto those Xeon Phis -
>> maybe we will have a return to 'proper' big iron
>> supercomputers - albeit with a commodity x86 heart!
>>
>>
>>
>> The Calxeda stuff looks interesting too. Can you REALLY just plug 100,000
>> nodes in and their built-in switches will
>> sort everything out? Wow.
>>
>