[Beowulf] Mellanox Multi-host

Scott Atchley e.scott.atchley at gmail.com
Wed Mar 11 07:44:16 PDT 2015


Looking at the Register article and at this press release:

http://www.mellanox.com/page/press_release_item?id=1501

It seems that OCP Yosemite is a baseboard that accepts four compute
cards, and the cards can even use different CPUs (x86, ARM, Power). The
Yosemite board carries the NIC and the connection to the switch. It is
not clear whether the "multi-host connection" is tunneled over the PCIe
link between each compute card and the Yosemite board, or whether
network traffic goes out a NIC on each compute card to an aggregator on
the Yosemite board. I expect it is tunneled over PCIe, but more details
would be nice.
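
If it is tunneled over PCIe, each compute card should enumerate the
shared adapter as an ordinary local PCIe endpoint. A minimal sketch of
checking that from one of the hosts (assumes a Linux compute card; the
sysfs layout is standard Linux, nothing Yosemite-specific, and 0x15b3
is Mellanox's PCI vendor ID):

    import glob

    # List PCI network controllers (device class 0x02) and flag
    # Mellanox ones. On a compute card whose "NIC" is really the
    # shared multi-host adapter tunneled over PCIe, the device should
    # still show up here as a local endpoint.
    for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
        with open(dev + "/vendor") as f:
            vendor = f.read().strip()
        with open(dev + "/class") as f:
            cls = f.read().strip()
        if cls.startswith("0x02"):  # network controller class
            tag = "(Mellanox)" if vendor == "0x15b3" else ""
            print(dev.rsplit("/", 1)[-1], vendor, cls, tag)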

It seems the whole OCP Yosemite project is geared towards avoiding NUMA and
using cheaper, simpler CPUs.
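
As a sanity check on the figures John quotes below, the 4:1 fan-out
arithmetic works out directly (plain division, nothing
product-specific):

    # 648 nodes with four hosts sharing each NIC, switch port, and cable
    nodes = 648
    fan_out = 4
    print(nodes // fan_out)  # 162 NICs, ports, and cables
    print(nodes)             # vs. 648 each with one NIC per node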

On Wed, Mar 11, 2015 at 8:51 AM, John Hearns <hearnsj at googlemail.com> wrote:

> Talking about 10Gbps networking... and above:
>
>
> http://www.theregister.co.uk/2015/03/11/mellanox_adds_networking_specs_to_ocp/
>
> "In the configuration Mellanox demonstrated, a 648-node cluster would only
> need 162 each of NICs, ports and cables."
>
> So it looks like one switch port can fan out to four hosts, and they
> talk about mixing FPGA and GPU. Might make for a very interesting
> cluster.