[Beowulf] Intel pulls networking onto Xeon Phi
atchley at tds.net
Mon Dec 2 05:37:40 PST 2013
I found this vague:
"The adapters could even borrow ideas from the Aries interconnect to give
it some extra goodies not found in standard Ethernet or InfiniBand
I am not sure what Aries currently offers that IB does not.
The issues with Ethernet in HPC are:
1. lack of standard kernel-bypass interface
2. minimum packet size is too large
3. topology discovery protocols
4. lack of multi-pathing
Ethernet got a bad rap for HPC due to TCP/IP/Ethernet and the lack of low
latency switches. As Myricom showed with MX over Ethernet, followed by
Mellanox with RoCE, you can get low latency over Ethernet by bypassing the
kernel and the TCP stack. Low latency switches from Arista, Gnodal, etc.
help as well.
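Even before full kernel bypass, the classic first step for small-message latency over plain TCP is disabling Nagle's algorithm, which otherwise coalesces small writes before putting them on the wire. This is not what MX or RoCE do (they bypass the TCP stack entirely), but it is a minimal, standard-sockets sketch of the same problem they solve more thoroughly:

```python
import socket

# Create a TCP socket and disable Nagle's algorithm so that small messages
# (e.g. 8-byte puts) are sent immediately instead of being batched while
# waiting for an ACK or a full segment.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Verify the option took effect (non-zero means Nagle is off).
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))
s.close()
```

This only removes one source of batching delay; the per-message trips through the kernel socket layer remain, which is why HPC stacks go straight to kernel-bypass interfaces.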
HPC sends a lot of small messages, and various stacks make use of
8-byte atomics. It is unhelpful to have a 64-byte minimum frame size in
Ethernet when the payload is 8 bytes.
Ethernet topology discovery protocols were designed for environments where
equipment can be changed out, expanded, or otherwise altered. They are
meant to be decentralized and plug-and-play. HPC environments, especially
supercomputers, are static and can benefit from centralized management.
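To put the frame-size point in concrete terms, here is a back-of-the-envelope sketch (plain Python; the constants come from IEEE 802.3: a minimum 64-byte frame including 18 bytes of header and FCS, plus 8 bytes of preamble/SFD and a 12-byte inter-packet gap on the wire):

```python
# Wire cost of carrying an 8-byte atomic in a minimum-size Ethernet frame.
PREAMBLE = 8        # preamble + start-of-frame delimiter
IPG = 12            # minimum inter-packet gap
MIN_FRAME = 64      # minimum frame size, header + payload + FCS
HEADER_AND_FCS = 18 # 14-byte Ethernet header + 4-byte FCS

def wire_bytes(payload: int) -> int:
    """Total bytes consumed on the wire to carry `payload` bytes."""
    frame = max(MIN_FRAME, payload + HEADER_AND_FCS)
    return PREAMBLE + frame + IPG

atomic = 8  # one 8-byte atomic operation
print(f"{wire_bytes(atomic)} bytes on the wire for {atomic} bytes of payload "
      f"({atomic / wire_bytes(atomic):.1%} efficient)")
# -> 84 bytes on the wire for 8 bytes of payload (9.5% efficient)
```

So an 8-byte atomic occupies 84 bytes of wire time, roughly 90% of it padding and overhead, which is why small-message-heavy HPC workloads care about minimum frame size.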
On Mon, Dec 2, 2013 at 5:05 AM, John Hearns <hearnsj at googlemail.com> wrote:
> I guess most of you are familiar with these roadmaps.
> A very good article anyway, especially the second half.
> Exciting stuff about networking integrated right onto those Xeon Phis -
> maybe we will have a return to 'proper' big iron
> supercomputers - albeit with a commodity x86 heart!
> The Calxeda stuff looks interesting too. Can you REALLY just plug 100 000
> nodes in and their built in switches will
> sort everything out? Wow.
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit