<div dir="ltr">I will supper C's "hater" listing effort just to keep a spot light on the important subject.<div><br></div><div>The question is not MPI is efficient or not. Fundamentally, all electronics will fail in unexpected ways. Bare metal computing was important decades ago but detrimental to large scale computing. It is simply flawed for extreme scale computing.</div><div><br></div><div>The Alan Fekete, Nancy Lynch, John Spinneli's impossible proof is the fundamental "line in the sand" that cannot be crossed.</div><div><br></div><div>The corollary of that proof is that it is impossible to detect failure reliably either. Therefore, those efforts for for runtime detection/repair/reschedule are also flawed for extreme scale computing.</div><div><br></div><div>Justin </div></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Mar 10, 2016 at 8:44 AM, Lux, Jim (337C) <span dir="ltr"><<a href="mailto:james.p.lux@jpl.nasa.gov" target="_blank">james.p.lux@jpl.nasa.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div style="word-wrap:break-word;color:rgb(0,0,0);font-size:14px;font-family:Calibri,sans-serif">
<div>This is interesting stuff.</div>
<div>Think back a few years to when we were talking about checkpoint/restart issues: as the scale of your problem grows, the time spent checkpointing eventually exceeds the time spent doing useful work.</div>
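To put rough numbers on that (mine, purely illustrative, not from the thread): using Young's classic approximation for the optimal checkpoint interval, the fraction of wall time lost to checkpointing climbs quickly as node count drives the system MTBF down. The 5-year per-node MTBF and 10-minute checkpoint below are assumptions, not measurements:

```python
import math

def young_interval(ckpt_seconds, mtbf_seconds):
    """Young's approximation for the optimal checkpoint interval."""
    return math.sqrt(2.0 * ckpt_seconds * mtbf_seconds)

def overhead_fraction(ckpt_seconds, mtbf_seconds):
    """Fraction of wall time lost to checkpointing alone (ignores restart/rework cost)."""
    tau = young_interval(ckpt_seconds, mtbf_seconds)
    return ckpt_seconds / (tau + ckpt_seconds)

node_mtbf = 5 * 365 * 24 * 3600.0   # assume each node fails once in 5 years
ckpt = 600.0                        # assume a 10-minute global checkpoint
for nodes in (1_000, 10_000, 100_000):
    mtbf = node_mtbf / nodes        # system MTBF shrinks as 1/N
    print(nodes, round(overhead_fraction(ckpt, mtbf), 3))
```

With these made-up figures the overhead goes from about 4% at 1k nodes to roughly 30% at 100k nodes, which is exactly the effect described above.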
<div>And, of course, the reason we do checkpoint/restart is because it’s bare-metal and easy. Just like simple message passing is “close to the metal” and “straightforward”.</div>
<div><br>
</div>
<div>Similarly, there’s “fine-grained” error detection and correction: ECC codes in memory; redundant comm links or retries. Each of them imposes some speed/performance penalty (it takes some non-zero time to compute the syndrome bits in an ECC, and some non-zero time to fix the errored bits… in a lot of systems these days, that might be buried in a pipeline, but the delay is there, and it affects performance).</div>
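As a concrete toy of “computing the syndrome bits”, here is a minimal Hamming(7,4) sketch, not any real memory controller’s logic, just the textbook scheme:

```python
def encode_hamming74(d):
    """Encode 4 data bits (list of 0/1) into a 7-bit Hamming codeword."""
    c = [0] * 8                      # 1-based positions; index 0 unused
    c[3], c[5], c[6], c[7] = d       # data bits at non-power-of-two positions
    c[1] = c[3] ^ c[5] ^ c[7]        # parity bits at positions 1, 2, 4
    c[2] = c[3] ^ c[6] ^ c[7]
    c[4] = c[5] ^ c[6] ^ c[7]
    return c[1:]

def syndrome(code):
    """XOR of the 1-based positions of set bits; 0 means no single-bit error."""
    s = 0
    for pos, bit in enumerate(code, start=1):
        if bit:
            s ^= pos
    return s

def correct(code):
    """Flip the bit the syndrome points at -- the 'non-zero time' spent fixing."""
    s = syndrome(code)
    if s:
        code[s - 1] ^= 1
    return code

word = encode_hamming74([1, 0, 1, 1])
assert syndrome(word) == 0
word[2] ^= 1                 # inject a single-bit fault at position 3
assert syndrome(word) == 3   # the syndrome names the errored position
```

Every read pays the syndrome computation whether or not an error occurred, which is the diffuse, evenly-spread penalty described below.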
<div><br>
</div>
<div>I think of ECC as a sort of diffuse fault management: it’s pervasive, uniform, and the performance penalty is applied evenly through the system. Redundant (in the TMR sense) links are the same way.</div>
<div><br>
</div>
<div>Retries are a bit different. Detecting a fault is diffuse and pervasive (e.g., CRC checks occur on every message), but correcting the fault is discrete and consumes resources at that moment. In a system with tight time coupling (a pipelined systolic array would be about the worst case), many nodes have to wait while the one that failed is fixed.</div>
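A minimal sketch of that split — CRC checked on every frame (diffuse), retransmission only on failure (discrete) — using a made-up corruption model, not any real transport:

```python
import random
import zlib

def send(payload):
    """Simulate a flaky link that sometimes corrupts a byte (hypothetical model)."""
    frame = bytearray(payload)
    if random.random() < 0.3:        # assumed 30% corruption rate
        frame[0] ^= 0xFF
    return bytes(frame), zlib.crc32(payload)   # CRC travels with the frame

def recv_with_retries(payload, max_retries=5):
    """Diffuse detection (CRC on each frame), discrete correction (retransmit)."""
    for attempt in range(max_retries):
        frame, crc = send(payload)
        if zlib.crc32(frame) == crc:
            return frame, attempt    # attempt counts the retries consumed
    raise RuntimeError("link down: retries exhausted")

random.seed(1)
data, retries = recv_with_retries(b"message body")
print(retries)   # with this seed: 1 retry, a burst of extra latency
```

The CRC cost is paid on every message; the retransmit cost lands all at once, and in a tightly coupled system everyone downstream stalls for it.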
<div><br>
</div>
<div>A lot depends on the application: tighter time coupling is worse than embarrassingly parallel (which is what a lot of the “big data” stuff is: fundamentally EP, scatter the requests, run in parallel, gather the results).</div>
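The EP scatter/gather shape in miniature (a hypothetical `handle` function standing in for per-request work, threads standing in for nodes):

```python
from concurrent.futures import ThreadPoolExecutor

def handle(request):
    """Stand-in for per-request work; no cross-talk between requests needed."""
    return request * request

def scatter_gather(requests, workers=4):
    """Scatter the requests, run in parallel, gather the results."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(handle, requests))

print(scatter_gather(range(8)))   # → [0, 1, 4, 9, 16, 25, 36, 49]
```

Because no request waits on any other, a failed worker only delays its own slice of the work, which is why EP tolerates faults so much more gracefully than tightly coupled codes.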
<div><br>
</div>
<div>The challenge is doing stuff in between: You may have a flock with excess capacity (just as ECC memory might have 1.5N physical storage bits to be used to store N bits), but how do you
<span style="font-weight:bold">automatically</span> distribute the resources to be failure-tolerant? The original post in the thread points out that MPI is not a particularly facile tool for doing this. But I’m not sure that there is such a tool, and I’m not sure that MPI is the root of the lack of tools. I think it’s that moving away from close to the metal is a “hard problem” to do in a generic way. (The issues about 32-bit counts are valid, though.)</div>
<div><br>
</div>
<div><br>
</div>
<div>
<p class="MsoNormal" style="margin:0in 0in 0.0001pt;font-size:11pt"><span style="color:rgb(31,73,125)">James Lux, P.E.<u></u><u></u></span></p>
<p class="MsoNormal" style="margin:0in 0in 0.0001pt;font-size:11pt"><span style="color:rgb(31,73,125)">Task Manager, DHFR Space Testbed</span></p>
<p class="MsoNormal" style="margin:0in 0in 0.0001pt;font-size:11pt"><span style="color:rgb(31,73,125)">Jet Propulsion Laboratory<u></u><u></u></span></p>
<p class="MsoNormal" style="margin:0in 0in 0.0001pt;font-size:11pt"><span style="color:rgb(31,73,125)">4800 Oak Grove Drive, MS 161-213<u></u><u></u></span></p>
<p class="MsoNormal" style="margin:0in 0in 0.0001pt;font-size:11pt"><span style="color:rgb(31,73,125)">Pasadena CA 91109<u></u><u></u></span></p>
<p class="MsoNormal" style="margin:0in 0in 0.0001pt;font-size:11pt"><span style="color:rgb(31,73,125)"><a href="tel:%2B1%28818%29354-2075" value="+18183542075" target="_blank">+1(818)354-2075</a><u></u><u></u></span></p>
<p class="MsoNormal" style="margin:0in 0in 0.0001pt;font-size:11pt"><span style="color:rgb(31,73,125)"><a href="tel:%2B1%28818%29395-2714" value="+18183952714" target="_blank">+1(818)395-2714</a> (cell)</span></p>
<p class="MsoNormal" style="margin:0in 0in 0.0001pt;font-size:11pt"><u></u> <u></u></p>
</div>
</div>
<br>_______________________________________________<br>
Beowulf mailing list, <a href="mailto:Beowulf@beowulf.org">Beowulf@beowulf.org</a> sponsored by Penguin Computing<br>
To change your subscription (digest mode or unsubscribe) visit <a href="http://www.beowulf.org/mailman/listinfo/beowulf" rel="noreferrer" target="_blank">http://www.beowulf.org/mailman/listinfo/beowulf</a><br>
<br></blockquote></div><br></div>