[Beowulf] Great Lakes cluster
hearnsj at googlemail.com
Sun Oct 21 10:57:40 PDT 2018
A comment from Brock Palen please?
I did a bid for a new HPC cluster at UCL in the UK, using FDR adapters and
100Gbps switches, making the same argument about cutting down on switch
counts while still having a non-blocking network (at the time Mellanox were
promoting FDR by selling it at 40Gbps prices).
But in this article, if you have 1x switch in a rack and use all 80 ports
(with splitters), there are not many ports left for uplinks!
I imagine this is 2x 200Gbps switches, with 20 ports of each switch
equipped with port splitters and the other 20 ports as uplinks.
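The port budget for that guessed layout can be sanity-checked with a little arithmetic. A minimal sketch, assuming 40-port 200Gbps switches and 2x100G splitter cables (both figures are my assumptions, not stated in the article):

```python
# Port-budget check for one assumed 40-port 200 Gbps switch:
# 20 ports carry 2x100G splitter cables to nodes, 20 ports are uplinks.
SWITCH_PORTS = 40      # ports per switch (assumed)
PORT_SPEED = 200       # Gbps per switch port
SPLIT_PORTS = 20       # ports fitted with 2x100G splitters
UPLINK_PORTS = SWITCH_PORTS - SPLIT_PORTS

node_links = SPLIT_PORTS * 2                   # 100 Gbps links down to nodes
down_bw = node_links * (PORT_SPEED // 2)       # total downlink Gbps
up_bw = UPLINK_PORTS * PORT_SPEED              # total uplink Gbps

print(f"{node_links} node ports, {down_bw} Gbps down, {up_bw} Gbps up")
print("non-blocking" if up_bw >= down_bw else
      f"oversubscribed {down_bw / up_bw:.1f}:1")
```

With those numbers each switch serves 40 nodes at 100Gbps with matching uplink bandwidth, i.e. 1:1 non-blocking; two such switches cover the 80 node ports mentioned above.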