I will slightly blow my own trumpet here. I think a design which has high-bandwidth uplinks and half-speed links to the compute nodes is a good idea.
I would love some pointers to studies on bandwidth utilisation in large-scale codes.
Are there really any codes which will use 200Gbps across many nodes simultaneously?

On Sun, 21 Oct 2018 at 18:57, John Hearns <hearnsj@googlemail.com> wrote:

> A comment from Brock Palen please?
> https://www.nextplatform.com/2018/10/18/great-lakes-super-to-remove-islands-of-compute/
>
> I did a bid for a new HPC cluster at UCL in the UK, using FDR adapters and 100Gbps switches, making the same arguments about cutting down on switch counts while still having a non-blocking network (at the time Mellanox were promoting FDR by selling it at 40Gbps prices).
>
> But in this article, if you have one switch in a rack and use all 80 ports (via splitters), there are not many ports left for uplinks!
> I imagine this is actually 2x 200Gbps switches, with 20 ports of each switch equipped with port splitters and the other 20 ports used as uplinks.
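To put some rough numbers on the 2x-switch guess above, here is a back-of-the-envelope sketch in Python. It assumes 40-port 200Gbps (HDR) switches where each split port yields two 100Gbps (HDR100) links to nodes; the exact port counts are my assumption, not something stated in the article.

# Back-of-the-envelope check of the "2x 200Gbps switches per rack" guess.
# Assumptions (mine, not from the article): 40-port 200Gbps (HDR) switches,
# each split port giving 2x 100Gbps (HDR100) links to compute nodes.

SWITCH_PORTS = 40                            # ports per 200Gbps switch
SPLIT_PORTS = 20                             # ports fitted with 2:1 splitters
UPLINK_PORTS = SWITCH_PORTS - SPLIT_PORTS    # remaining ports used as uplinks

node_links = SPLIT_PORTS * 2                 # 40 node links at 100Gbps each
downlink_bw = node_links * 100               # Gbps towards the compute nodes
uplink_bw = UPLINK_PORTS * 200               # Gbps towards the spine

print(f"{node_links} node links, {downlink_bw} Gbps down, {uplink_bw} Gbps up")
print(f"oversubscription ratio: {downlink_bw / uplink_bw:.1f}:1")
# -> 40 node links, 4000 Gbps down, 4000 Gbps up, ratio 1.0:1 (non-blocking)

Under those assumptions, two such switches per rack would give the 80 node ports mentioned above while keeping downlink and uplink bandwidth equal, i.e. still non-blocking at the rack level.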