[Beowulf] Great Lakes cluster

John Hearns hearnsj at googlemail.com
Mon Oct 22 01:09:40 PDT 2018


I will slightly blow my own trumpet here. I think a design which has
high-bandwidth uplinks and half-speed links to the compute nodes is a good
idea. I would love some pointers to studies of bandwidth utilisation in
large-scale codes.
Are there really any codes which will use 200Gbps across many nodes
simultaneously?
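
If anyone wants to measure this on their own system, below is a minimal
sketch of a concurrent point-to-point bandwidth probe using mpi4py and
NumPy (the rank pairing, 64 MiB message size and iteration count are
arbitrary choices of mine, not from any standard benchmark - tools like
osu_bw do this properly):

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()
assert size % 2 == 0, "run with an even number of ranks, one per node"

MSG_BYTES = 64 * 1024 * 1024          # 64 MiB, large enough to stream
ITERS = 20
buf = np.zeros(MSG_BYTES, dtype=np.uint8)
half = size // 2

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(ITERS):
    if rank < half:
        comm.Send(buf, dest=rank + half, tag=0)    # first half sends...
    else:
        comm.Recv(buf, source=rank - half, tag=0)  # ...second half receives
comm.Barrier()
t1 = MPI.Wtime()

total_bytes = half * ITERS * MSG_BYTES  # bytes moved by all senders together
if rank == 0:
    gbps = total_bytes * 8 / (t1 - t0) / 1e9
    print(f"aggregate one-way bandwidth: {gbps:.1f} Gbit/s "
          f"({gbps / half:.1f} Gbit/s per sending rank)")

Launched with one rank per node (with Open MPI, something like
mpirun -np 16 --map-by node python probe.py), the first half of the ranks
all stream to the second half at once, so if the halves land in different
racks every pair crosses the uplinks simultaneously.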

On Sun, 21 Oct 2018 at 18:57, John Hearns <hearnsj at googlemail.com> wrote:

> A comment from Brock Palen, please?
>
> https://www.nextplatform.com/2018/10/18/great-lakes-super-to-remove-islands-of-compute/
>
> I did a bid for a new HPC cluster at UCL in the UK, using FDR adapters and
> 100Gbps switches, making the same argument about cutting down on switch
> counts while still having a non-blocking network (at the time Mellanox were
> promoting FDR by selling it at 40Gbps prices).
>
> But in this article, if you have one switch in a rack and use all 80 ports
> (via splitters), there are not many ports left for uplinks!
> I imagine the design is 2x 200Gbps switches, with 20 ports of each switch
> equipped with port splitters and the other 20 ports kept as uplinks - see
> the sketch after this quote.
>
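
For what it's worth, the arithmetic on that guess comes out non-blocking
at the rack level. A back-of-the-envelope sketch (the 40-port switch size
is my assumption, based on QM8700-class HDR hardware; the 20/20 port
split is the guess above):

# one rack: 2x 40-port 200Gbps HDR switches (assumed), 20/20 split each
SWITCH_PORTS = 40
LINK_GBPS = 200

split_ports = 20                           # each splits into 2x 100Gbps links
uplink_ports = SWITCH_PORTS - split_ports  # 20 left over for the spine

node_links = split_ports * 2               # 40 nodes per switch at 100Gbps
down_gbps = node_links * (LINK_GBPS // 2)  # 40 * 100 = 4000
up_gbps = uplink_ports * LINK_GBPS         # 20 * 200 = 4000

print(f"{node_links} nodes/switch, {down_gbps} Gbps down, {up_gbps} Gbps up, "
      f"oversubscription {down_gbps / up_gbps:.1f}:1")

Two such switches would then give the 80 node ports per rack mentioned
above while staying 1:1 into the spine.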