<div dir="ltr">On Wed, Apr 8, 2015 at 9:56 PM, Greg Lindahl <span dir="ltr"><<a href="mailto:lindahl@pbm.com" target="_blank">lindahl@pbm.com</a>></span> wrote:<br><div class="gmail_extra"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On Wed, Apr 08, 2015 at 03:57:34PM -0400, Scott Atchley wrote:<br>
<br>
> There is concern by some and outright declaration by others (including<br>
> hardware vendors) that MPI will not scale to exascale due to issues like<br>
> rank state growing too large for 10-100 million endpoints,<br>
<br>
</span>That's weird, given that it's an implementation choice.<br></blockquote><div><br></div><div>It is one of the concerns raised, but not the only one. No one is giving up on MPI; that is not an option given the existing code base. There are efforts to avoid duplicating rank information within a node (no need for each rank to keep its own copy), or to use a single MPI rank per node with OpenMP (or similar) managing node-local parallelism, at the risk of all of a large many-core node's cores trying to access the NIC at the same time.</div><div><br></div><div>I am not advocating for or against MPI, nor predicting its imminent demise, but I am aware of the concerns raised by the vendors.</div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Presumably Intel is keeping the PathScale tiny rank state as a<br>
feature?<br></blockquote><div><br></div><div>One would expect, but that is probably necessary but not sufficient for their many-core future.</div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Reliability, now that's a serious issue! And not one that's trivially<br>
fixed for any problem that must be tightly-coupled.<br></blockquote><div><br></div><div>Yes, and we are open to suggestions. ;-) </div></div></div></div>