<div dir="ltr">Just noticed that Jim works for JPL. For space communications, a fat pipe can be very expensive, but more pipes are possible from multiple satellite relays.<div><br></div><div>Remote machines need even more SMC architecture help, for both resilience and energy-efficiency reasons.</div><div><br></div><div>Justin</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Sat, Mar 12, 2016 at 11:59 AM, Justin Y. Shi <span dir="ltr"><<a href="mailto:shi@temple.edu" target="_blank">shi@temple.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Great conversations for starting an exciting weekend!<div><br></div><div>The critical architecture feature for extreme scale of anything is its growth potential. Even in the large data transfer example, the architecture does not break down: fatter pipes and more pipes are guaranteed to solve the problems at hand.</div><div><br></div><div>The performance trading I mentioned was meant for computing, not for communication. The "bare metal" APIs force the programs onto the hardware, so the granularity of processing cannot be adjusted after the programs are compiled. No one is willing to recode because the hardware is shared with others or simply upgraded with a different processor or interconnect. This results in great efficiency losses that most of us do not want to see.</div><div><br></div><div>The compiler and dependency questions were also well taken here. Syntactic "sugars" cannot solve the dependency issues since they are ultimately static at compile time. Even the most savvy programmers cannot guarantee the correct and most efficient partitioning. In my humble opinion, leaving this mess to the runtime system is the only way out. We have tested this hypothesis by comparing optimized MPI programs against SMC (statistic multiplexed computing) programs with varying processing granularities. 
We have recorded consistent wins despite the larger overheads of the SMC runtime's "data matching/fault tolerance engine". Two years ago, we even compared MPI with SMC-wrapped MPI and still scored consistent wins. Unfortunately, most people on the SC committees did not believe these results, and no one is willing to try it for themselves either. I do not see any downloads from my GitHub site except by my own students.</div><div><br></div><div>Thanks again to Mr. C for restarting the life of this list!</div><div><br></div><div>Enjoy the great sunny weekend!</div><span class="HOEnZb"><font color="#888888"><div><br></div><div>Justin</div><div><br></div><div><br></div></font></span></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Mar 11, 2016 at 4:28 PM, Lux, Jim (337C) <span dir="ltr"><<a href="mailto:james.p.lux@jpl.nasa.gov" target="_blank">james.p.lux@jpl.nasa.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div lang="EN-US" link="blue" vlink="purple">
<div>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">I’ll agree that architecture is the key.
<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Packet switching breaks when you need to move petabytes in a reasonable time across large distances, because each intermediate node has to have storage for DataRate*RoundTripTime
(so that flow control works). If you’re moving Tbits/sec (e.g. something like 3D medical images) across the country, that means your store-and-forward switches need to store a fair amount of data.<u></u><u></u></span></p>
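To put a rough number on that buffering requirement, here is a quick back-of-the-envelope sketch; the 1 Tbit/s flow and ~60 ms coast-to-coast round trip below are illustrative assumptions, not figures from the message above:

```python
# Per-node store-and-forward buffering: each intermediate node must hold
# DataRate * RoundTripTime of in-flight data so that flow control (and
# any retransmission) can work.

def buffer_bytes(data_rate_bps: float, rtt_s: float) -> float:
    """Bytes a switch must buffer for one flow at the given rate and RTT."""
    return data_rate_bps * rtt_s / 8  # divide by 8: bits -> bytes

rate = 1e12   # assumed 1 Tbit/s flow
rtt = 0.060   # assumed ~60 ms cross-country round trip

gib = buffer_bytes(rate, rtt) / 2**30
print(f"per-node buffer: {gib:.1f} GiB per flow")  # ~7 GiB, per hop
```

So a single such flow already demands several gibibytes of buffer at every hop, which is the "fair amount of data" point made above.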
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">That’s a narrow example.<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">But I’m not sure you can always trade things to get more reliability without impacting performance.<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">At some point, you can probably take an information theory approach. You’ve got a channel with a certain error rate, and you want to push a certain data rate
through it at a particular (lower) error rate. <u></u><u></u></span></p><span>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Jim Lux<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"><a href="tel:%28818%29354-2075" value="+18183542075" target="_blank">(818)354-2075</a> (office)<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"><a href="tel:%28818%29395-2714" value="+18183952714" target="_blank">(818)395-2714</a> (cell)<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"><u></u> <u></u></span></p>
</span><p class="MsoNormal"><b><span style="font-size:11.0pt;font-family:"Calibri",sans-serif">From:</span></b><span style="font-size:11.0pt;font-family:"Calibri",sans-serif"> Justin Y. Shi [mailto:<a href="mailto:shi@temple.edu" target="_blank">shi@temple.edu</a>]
<br>
<b>Sent:</b> Thursday, March 10, 2016 3:12 PM<span><br>
<b>To:</b> Lux, Jim (337C) <<a href="mailto:james.p.lux@jpl.nasa.gov" target="_blank">james.p.lux@jpl.nasa.gov</a>><br>
</span><b>Cc:</b> Douglas Eadline <<a href="mailto:deadline@eadline.org" target="_blank">deadline@eadline.org</a>>; <a href="mailto:beowulf@beowulf.org" target="_blank">beowulf@beowulf.org</a></span></p><div><div><br>
<b>Subject:</b> Re: [Beowulf] MPI, fault handling, etc.<u></u><u></u></div></div><p></p><div><div>
<p class="MsoNormal"><u></u> <u></u></p>
<div>
<p class="MsoNormal">Not only is high reliability possible; incrementally higher performance is possible at the same time, or the Internet would have crumbled by now.<u></u><u></u></p>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal">The key is in the architecture. The overheads can be traded for performance gains elsewhere without sacrificing reliability or scalability.<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal">In other words, the packet switching network proved that you can have your cake and eat it too.<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal">Justin<u></u><u></u></p>
</div>
</div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
<div>
<p class="MsoNormal">On Thu, Mar 10, 2016 at 4:43 PM, Lux, Jim (337C) <<a href="mailto:james.p.lux@jpl.nasa.gov" target="_blank">james.p.lux@jpl.nasa.gov</a>> wrote:<u></u><u></u></p>
<blockquote style="border:none;border-left:solid #cccccc 1.0pt;padding:0in 0in 0in 6.0pt;margin-left:4.8pt;margin-right:0in">
<div>
<div>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">High reliability is possible, at the expense of substantial additional resources. Packet switching
requires storage to hold unacknowledged packets that might need to be resent and it requires time for the resends.
</span><u></u><u></u></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">If you have time critical processes with fine granularity, then you have to budget in the memory and
time to achieve that reliability.</span><u></u><u></u></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">The other thing is that some high reliability approaches do not scale well. A bit error rate of 1E-9
would be outstandingly good on a radio link, but if you’re sending data at 1Gbps, that’s an error every second. If you’re sending data at 1 Tbps, that’s an error every millisecond. If your packets are a millisecond long, then you get no successful packets
through.</span><u></u><u></u></p>
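The arithmetic behind those error intervals can be checked directly; this is a sketch of the calculation in the paragraph above (the packet-survival figure is the standard Poisson-style estimate, not a claim from the original message):

```python
import math

def seconds_between_errors(ber: float, rate_bps: float) -> float:
    """Mean time between bit errors for a given BER and line rate."""
    return 1.0 / (ber * rate_bps)

BER = 1e-9
print(seconds_between_errors(BER, 1e9))    # 1 Gb/s -> 1.0 (an error every second)
print(seconds_between_errors(BER, 1e12))   # 1 Tb/s -> 0.001 (every millisecond)

# A 1 ms packet at 1 Tb/s carries 1e9 bits, so it expects one error on
# average; the chance it arrives clean is only about e^-1, ~37%, so
# goodput collapses.
p_clean = (1 - BER) ** 1e9
print(f"{p_clean:.2f}")
```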
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Fortunately, “over the wire” error rates are much better (1E-12 wouldn’t be unusual), but that’s still
a once every 20 minutes sort of error rate at 1 Gbps. And if you’re talking about interprocessor interconnects.. a 12x Infiniband is about 300 Gb/sec. A 1E-12 error rate would be an error every few seconds. At those rates, of course you’re going to use
some sort of forward error correction (you’re not going to depend on retry), and that might be an effective data rate reduction of 30-50% (depending on the coding).</span><u></u><u></u></p>
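The same arithmetic applied to the wire-rate numbers above, plus the effect of the quoted 30-50% FEC overhead on a 300 Gb/s link (a sketch; the figures are taken from the paragraph, not measured):

```python
def seconds_between_errors(ber: float, rate_bps: float) -> float:
    """Mean time between bit errors for a given BER and line rate."""
    return 1.0 / (ber * rate_bps)

print(seconds_between_errors(1e-12, 1e9) / 60)   # 1 Gb/s: ~17 min between errors
print(seconds_between_errors(1e-12, 300e9))      # 12x IB: ~3 s between errors

# Forward error correction trades raw bandwidth for a lower post-correction
# error rate; with 30-50% coding overhead the effective payload rate is:
for overhead in (0.30, 0.50):
    print(f"{300e9 * (1 - overhead) / 1e9:.0f} Gb/s effective")
```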
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d">Jim Lux</span><u></u><u></u></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"><a href="tel:%28818%29354-2075" target="_blank">(818)354-2075</a> (office)</span><u></u><u></u></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"><a href="tel:%28818%29395-2714" target="_blank">(818)395-2714</a> (cell)</span><u></u><u></u></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1f497d"> </span><u></u><u></u></p>
<p class="MsoNormal"><b><span style="font-size:11.0pt;font-family:"Calibri",sans-serif">From:</span></b><span style="font-size:11.0pt;font-family:"Calibri",sans-serif"> Justin Y. Shi [mailto:<a href="mailto:shi@temple.edu" target="_blank">shi@temple.edu</a>]
<br>
<b>Sent:</b> Thursday, March 10, 2016 1:14 PM<br>
<b>To:</b> Douglas Eadline <<a href="mailto:deadline@eadline.org" target="_blank">deadline@eadline.org</a>><br>
<b>Cc:</b> Lux, Jim (337C) <<a href="mailto:james.p.lux@jpl.nasa.gov" target="_blank">james.p.lux@jpl.nasa.gov</a>>;
<a href="mailto:beowulf@beowulf.org" target="_blank">beowulf@beowulf.org</a><br>
<b>Subject:</b> Re: [Beowulf] MPI, fault handling, etc.</span><u></u><u></u></p>
<p class="MsoNormal"> <u></u><u></u></p>
<div>
<p class="MsoNormal">Not that fast though. 100% reliability is practically achievable. And we are enjoying the results every day. I mean the wireless and wired packet-switching networks.<u></u><u></u></p>
<div>
<div>
<div>
<p class="MsoNormal"> <u></u><u></u></p>
</div>
<div>
<p class="MsoNormal">The problem is our tendency to draw fast conclusions. The one human-made architecture that defies the "curse" of component failure is the statistical multiplexing principle (or packet switching).<u></u><u></u></p>
</div>
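One way to see why retransmission-based packet switching "defies the curse" is that end-to-end delivery probability improves geometrically with each independent retry, even over lossy links. A minimal sketch, assuming an illustrative 1% per-attempt loss rate and independent attempts:

```python
# End-to-end delivery with acknowledgment + retransmission: if each attempt
# independently succeeds with probability (1 - loss), the chance that ALL of
# n attempts fail shrinks geometrically as loss**n.

def delivery_probability(loss: float, attempts: int) -> float:
    """Probability that at least one of `attempts` tries gets through."""
    return 1.0 - loss ** attempts

loss = 0.01  # assumed 1% per-attempt packet loss
for n in (1, 2, 3, 4):
    print(n, delivery_probability(loss, n))
# By 4 attempts the residual failure probability is 1e-8 -- effectively
# "perfect" delivery built from not-so-reliable components.
```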
<div>
<p class="MsoNormal"> <u></u><u></u></p>
</div>
<div>
<p class="MsoNormal">It has proven to work using a growing number of not-so-reliable devices without suffering the scalability dilemma. We should learn how to apply that technology to extreme-scale computing. To this day, the full extent of the protocol logic still cannot be adequately described formally on paper. But it works well, if done right.<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal"> <u></u><u></u></p>
</div>
<div>
<p class="MsoNormal"> <u></u><u></u></p>
</div>
<div>
<p class="MsoNormal">Justin<u></u><u></u></p>
</div>
</div>
</div>
</div>
<div>
<div>
<div>
<p class="MsoNormal"> <u></u><u></u></p>
<div>
<p class="MsoNormal">On Thu, Mar 10, 2016 at 3:44 PM, Douglas Eadline <<a href="mailto:deadline@eadline.org" target="_blank">deadline@eadline.org</a>> wrote:<u></u><u></u></p>
<blockquote style="border:none;border-left:solid #cccccc 1.0pt;padding:0in 0in 0in 6.0pt;margin-left:4.8pt;margin-top:5.0pt;margin-right:0in;margin-bottom:5.0pt">
<p class="MsoNormal"><br>
> I will support C's "hater" listing effort just to keep a spotlight on the<br>
> important subject.<br>
><br>
> The question is not whether MPI is efficient. Fundamentally, all<br>
> electronics<br>
> will fail in unexpected ways. Bare metal computing was important decades<br>
> ago but detrimental to large scale computing. It is simply flawed for<br>
> extreme scale computing.<br>
><br>
> The Alan Fekete, Nancy Lynch, and John Spinelli impossibility proof is the<br>
> fundamental "line in the sand" that cannot be crossed.<br>
><br>
> The corollary of that proof is that it is impossible to detect failure<br>
> reliably either. Therefore, those efforts for runtime<br>
> detection/repair/reschedule are also flawed for extreme scale computing.<br>
><br>
<br>
Well on that note, I suppose we should just call it a day.<br>
Although some thought Gödel would put the whole math thing<br>
out of business as well.<br>
<br>
--<br>
Doug<u></u><u></u></p>
<div>
<div>
<p class="MsoNormal"><br>
<br>
<br>
<br>
> Justin<br>
><br>
> On Thu, Mar 10, 2016 at 8:44 AM, Lux, Jim (337C)<br>
> <<a href="mailto:james.p.lux@jpl.nasa.gov" target="_blank">james.p.lux@jpl.nasa.gov</a>><br>
> wrote:<br>
><br>
>> This is interesting stuff.<br>
>> Think back a few years when we were talking about checkpoint/restart<br>
>> issues: as the scale of your problem gets bigger, the time to checkpoint<br>
>> becomes bigger than the time actually doing useful work.<br>
>> And, of course, the reason we do checkpoint/restart is because it’s<br>
>> bare-metal and easy. Just like simple message passing is “close to<br>
>> the<br>
>> metal” and “straightforward”.<br>
>><br>
>> Similarly, there’s “fine grained” error detection and correction:<br>
>> ECC<br>
>> codes in memory; redundant comm links or retries. Each of them imposes<br>
>> some speed/performance penalty (it takes some non-zero time to compute<br>
>> the<br>
>> syndrome bits in a ECC, and some non-zero time to fix the errored<br>
>> bits… in<br>
>> a lot of systems these days, that might be buried in a pipeline, but the<br>
>> delay is there, and affects performance)<br>
>><br>
>> I think of ECC as a sort of diffuse fault management: it’s pervasive,<br>
>> uniform, and the performance penalty is applied evenly through the<br>
>> system.<br>
>> Redundant (in the TMR sense) links are the same way.<br>
>><br>
>> Retries are a bit different. The “detecting” a fault is diffuse and<br>
>> pervasive (e.g. CRC checks occur on each message), but the correction of<br>
>> the fault is discrete and consumes resources at that time. In a system<br>
>> with tight time coupling (a pipelined systolic array would be the sort<br>
>> of<br>
>> worst case), many nodes have to wait to fix the one that failed.<br>
>><br>
>> A lot depends on the application: tighter time coupling is worse than<br>
>> embarrassingly parallel (which is what a lot of the “big data” stuff<br>
>> is:<br>
>> fundamentally EP, scatter the requests, run in parallel, gather the<br>
>> results).<br>
>><br>
>> The challenge is doing stuff in between: You may have a flock with<br>
>> excess<br>
>> capacity (just as ECC memory might have 1.5N physical storage bits to be<br>
>> used to store N bits), but how do you automatically distribute the<br>
>> resources to be failure tolerant. The original post in the thread<br>
>> points<br>
>> out that MPI is not a particularly facile tool for doing this. But<br>
>> I’m not<br>
>> sure that there is a tool, and I’m not sure that MPI is the root of<br>
>> the<br>
>> lack of tools. I think it’s that moving from close to the metal is<br>
>> a<br>
>> “hard problem” to do in a generic way. (The issues about 32 bit<br>
>> counts are<br>
>> valid, though)<br>
>><br>
>><br>
>> James Lux, P.E.<br>
>><br>
>> Task Manager, DHFR Space Testbed<br>
>><br>
>> Jet Propulsion Laboratory<br>
>><br>
>> 4800 Oak Grove Drive, MS 161-213<br>
>><br>
>> Pasadena CA 91109<br>
>><br>
>> <a href="tel:%2B1%28818%29354-2075" target="_blank">+1(818)354-2075</a><br>
>><br>
>> <a href="tel:%2B1%28818%29395-2714" target="_blank">+1(818)395-2714</a> (cell)<br>
>><br>
>><br>
>><br>
>> _______________________________________________<br>
>> Beowulf mailing list, <a href="mailto:Beowulf@beowulf.org" target="_blank">Beowulf@beowulf.org</a> sponsored by Penguin Computing<br>
>> To change your subscription (digest mode or unsubscribe) visit<br>
>> <a href="http://www.beowulf.org/mailman/listinfo/beowulf" target="_blank">http://www.beowulf.org/mailman/listinfo/beowulf</a><br>
>><br>
>><br>
><u></u><u></u></p>
</div>
</div>
<div>
<div>
</div>
</div>
<p class="MsoNormal" style="margin-bottom:12.0pt"><span style="color:#888888">--<br>
Doug</span><u></u><u></u></p>
</blockquote>
</div>
<p class="MsoNormal"> <u></u><u></u></p>
</div>
</div>
</div>
</div>
</div>
</blockquote>
</div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
</div></div></div>
</div>
</blockquote></div><br></div>
</div></div></blockquote></div><br></div>