On Feb 26, 2010, at 12:36 PM, richard.walsh@comcast.net wrote:

> Mark Hahn wrote:
>
> >> Doesn't this assume worst-case all-to-all type communication
> >> patterns?
> >
> > I'm assuming random point-to-point communication, actually.
>
> A sub-case of all-to-all (possibly all-to-all). So you are assuming
> random point-to-point is a common pattern in HPC ... mmm ... I
> would call it a worst-case pattern, something more typical of
> graph-searching codes like they run at the NSA. Sure, a high-radix
> switch (or better yet a global memory address space, Cray X1E) is
> good and designed for this worst case, but I'm not sure this is the
> common-case data reference pattern in HPC ... if it were, they
> would be selling more global memory systems at Cray and SGI (not
> just to the NSA).

Designing the communications network for this worst-case pattern has a
number of benefits:

* it makes the machine less sensitive to the actual communications pattern
* it makes performance less variable run-to-run when the job controller
  chooses different subsets of the system

> There you might also want a machine like the Cray XMT, where
> the memory is flat and stalled threads can be switched out for
> another thread.
>
> >> If you are just trading ghost cell data with your neighbors
> >> and you have placed your job smartly on the torus, the fan-out
> >> advantage mentioned is irrelevant. No?

Smart placement is a lot harder than it appears:

* the actual communications pattern often doesn't match preconceptions
* communications from concurrently running applications can interfere

There's a paper in the IBM Journal of Research and Development about this;
they wound up using simulated annealing to find good placements on the most
regular machine around, because the "obvious" assignments weren't optimal.
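To make the idea concrete, here is a toy sketch of that kind of annealing-based
placement (my own made-up traffic matrix, cost function, and cooling schedule,
not the paper's formulation): map N tasks onto an N-node 2-D torus, repeatedly
propose swapping two tasks, and keep swaps that reduce the traffic-weighted hop
count (plus, while the "temperature" is high, an occasional swap that doesn't).

/* Toy simulated-annealing placement: map N tasks onto an N-node 2-D torus
 * so that heavily communicating task pairs end up few hops apart.
 * Purely illustrative; traffic matrix and schedule are invented. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define DIM   4              /* 4x4 torus */
#define N     (DIM * DIM)    /* one task per node */

static double traffic[N][N]; /* bytes exchanged by each task pair (made up) */
static int    node_of[N];    /* current placement: task -> node */

/* Manhattan distance on the torus between two node indices. */
static int hops(int a, int b)
{
    int dx = abs(a % DIM - b % DIM), dy = abs(a / DIM - b / DIM);
    if (dx > DIM - dx) dx = DIM - dx;
    if (dy > DIM - dy) dy = DIM - dy;
    return dx + dy;
}

/* Total cost: traffic-weighted hop count over all task pairs. */
static double cost(void)
{
    double c = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = i + 1; j < N; j++)
            c += traffic[i][j] * hops(node_of[i], node_of[j]);
    return c;
}

int main(void)
{
    srand(1);
    /* Fake traffic matrix: a few heavy pairs, light background noise. */
    for (int i = 0; i < N; i++)
        for (int j = i + 1; j < N; j++)
            traffic[i][j] = traffic[j][i] = (rand() % 8 == 0) ? 100.0 : 1.0;
    for (int i = 0; i < N; i++)
        node_of[i] = i;      /* start from the "obvious" identity mapping */

    double cur = cost(), temp = 100.0;
    printf("initial cost %.0f\n", cur);
    for (int step = 0; step < 200000; step++, temp *= 0.99995) {
        int a = rand() % N, b = rand() % N;   /* propose swapping two tasks */
        int t = node_of[a]; node_of[a] = node_of[b]; node_of[b] = t;
        double next = cost(), delta = next - cur;
        if (delta <= 0 || exp(-delta / temp) > (double)rand() / RAND_MAX)
            cur = next;                       /* accept the swap */
        else {
            t = node_of[a]; node_of[a] = node_of[b]; node_of[b] = t; /* undo */
        }
    }
    printf("annealed cost %.0f\n", cur);
    return 0;
}

Even this toy version usually beats the identity placement, which is the
paper's point: intuition about "obvious" layouts is a poor guide.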
...

In addition to this stuff, the quality of the interconnect has other effects:

* a fast, low-latency interconnect lets the application scale effectively to
  larger numbers of nodes before performance rolls off
* an interconnect with low-latency short messages provides a decent base for
  PGAS languages like UPC and CoArray Fortran, or for lightweight communications
  APIs like SHMEM or active messages (see the small SHMEM sketch at the end of
  this message)

Personally, I believe our thinking about interconnects has been poisoned by
treating NICs as I/O devices. We would be better off if they were coprocessors.
Threads should be able to send messages by writing to registers, and arriving
packets should activate a hyperthread that has full core capabilities for
acting on them and can interact coherently with the memory hierarchy from the
same end as other processors. We had started kicking this around for the
SiCortex gen-3 chip, but were overtaken by events.

-Larry
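P.S. On the SHMEM/PGAS point above, here is roughly what I mean by a short
one-sided message. This is an OpenSHMEM-flavored sketch (the calls are real,
the program is a made-up toy): each PE drops one word directly into its
neighbor's memory with a single put, and nobody posts a receive.

/* Minimal OpenSHMEM-style sketch: each PE writes one long directly into
 * its right-hand neighbor's memory with a one-sided put. */
#include <stdio.h>
#include <shmem.h>

int main(void)
{
    static long inbox = -1;          /* symmetric: exists on every PE */

    shmem_init();
    int me = shmem_my_pe();
    int np = shmem_n_pes();
    long token = me;

    /* Put my rank into the neighbor's inbox; the target posts no receive. */
    shmem_long_put(&inbox, &token, 1, (me + 1) % np);

    shmem_barrier_all();             /* make all puts visible */
    printf("PE %d received %ld\n", me, inbox);

    shmem_finalize();
    return 0;
}

The target CPU need not be involved in the transfer at all, so the cost of
that put is essentially NIC plus network. That is why short-message latency
matters so much for this style of programming.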