[Beowulf] Great Lakes cluster

John Hearns hearnsj at googlemail.com
Sun Oct 21 10:57:40 PDT 2018

A comment from Brock Palen, please?

I did a bid for a new HPC cluster at UCL in the UK, using FDR adapters and
100Gbps switches, making the same argument about cutting down on switch
counts while still having a non-blocking network (at the time Mellanox were
promoting FDR by selling it at 40Gbps prices).

But in this article, if you have one switch in a rack and use all 80 ports
(with splitters), there are not many ports left for uplinks!
I imagine this is 2x 200Gbps switches, with 20 ports of each switch
equipped with port splitters and the other 20 ports used as uplinks.
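The arithmetic behind that guess can be sketched as follows (a rough
back-of-the-envelope check, assuming 40-port HDR switches, which the
20 + 20 split implies; the splitter ratio of 2x HDR100 per HDR port is
an assumption too):

```python
# Back-of-the-envelope check: per-rack HDR switch with port splitters.
# Assumes a 40-port, 200 Gb/s (HDR) switch; each split port yields
# two 100 Gb/s (HDR100) host ports.
SWITCH_PORTS = 40
HDR_GBPS = 200

split_ports = 20                         # ports fitted with 2x HDR100 splitters
uplink_ports = SWITCH_PORTS - split_ports  # remaining ports go to the spine

host_ports = split_ports * 2                     # HDR100 host ports per switch
downlink_gbps = host_ports * (HDR_GBPS // 2)     # aggregate host bandwidth
uplink_gbps = uplink_ports * HDR_GBPS            # aggregate uplink bandwidth

# Equal aggregate bandwidth up and down => non-blocking at the rack level.
assert downlink_gbps == uplink_gbps
print(host_ports, downlink_gbps, uplink_gbps)  # 40 4000 4000
```

With two such switches per rack you get 80 HDR100 host ports and still
keep a 1:1 oversubscription ratio, which would square with the article's
numbers.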
