As ever, good stuff from Doug, but I’ll just add a little more background.

When we standardised MPI-1 (I was in the room in Dallas for most of this :-)) we did not expect it still to be the dominant interface which users would be coding to 25 years later; rather, we expected that MPI would form a reasonable basis for higher-level interfaces to be built upon, and we hoped that it would provide enough performance and be rich enough semantically to allow that to happen.

Therefore our aim was not to make it a perfect, high-level, end-user interface, but rather to make it something which we (as implementers) knew how to implement efficiently while providing a reasonable, portable, vendor-neutral layer which would be usable either by end-user code or by higher-level libraries (which could certainly include runtime libraries for higher-level languages).
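(As a reminder of what coding directly to that layer looks like, here is a minimal, illustrative sketch; the tag value and the data sent are arbitrary choices, and it assumes at least two ranks, e.g. something like "mpirun -np 2 ./a.out".)

/* Minimal sketch: rank 0 sends one integer to rank 1, which prints it.
 * The tag (42) and the value sent are arbitrary, illustrative choices. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 25;
        MPI_Send(&value, 1, MPI_INT, 1, 42, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 42, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();
    return 0;
}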
Maybe we made it too usable, so no-one bothered with the higher-level interfaces :-) (I still have the two competing tee-shirts, one criticising MPI for being too big and having too many functions in the interface [an opinion from PVM…], the other quoting Occam as a rebuttal: “praeter necessitatem” :-))

Overall MPI succeeded way beyond our expectations, and, I think, we did a pretty good job. (MPI-1 was missing some things, like support for reliability, but that, at least, was an explicit decision, since, at the time, a cluster had maybe 64 nodes and was plugged into a single wall socket, and we wanted to get the standard out on time!)

-- Jim
James Cownie <jcownie@gmail.com>
Mob: +44 780 637 7146

On 13 Oct 2020, at 22:03, Douglas Eadline <deadline@eadline.org> wrote:

> On Tue, Oct 13, 2020 at 3:54 PM Douglas Eadline <deadline@eadline.org> wrote:
>
>> It really depends on what you need to do with Hadoop or Spark.
>> IMO many organizations don't have enough data to justify
>> standing up a 16-24 node cluster system with a PB of HDFS.
>
> Excellent. If I understand what you are saying, there is simply no demand
> to mix technologies, esp. in the academic world. OK. In your opinion, and
> independent of the Spark/HDFS discussion, why are we still only on openMPI
> in the world of writing distributed code on HPC clusters? Why is there
> nothing else gaining any significant traction? Why is there no innovation
> in exposing higher-level abstractions, hiding the details, and making it
> easier to write correct code that is easier to reason about and does not
> burden the writer with too much low-level detail? Is it just the amount of
> investment in an existing knowledge base? Is it that there is nothing out
> there to compel people to spend the time to learn it? Or is there nothing
> there? Or maybe there is and I am just blissfully unaware? :)

I have been involved in HPC and parallel computing since the 1980s.
Prior to MPI, every vendor had its own message-passing library. Initially
PVM (Parallel Virtual Machine) from Oak Ridge was developed so there
would be some standard API for creating parallel codes. It worked well
but needed more. MPI was developed so parallel hardware vendors
(not many back then) could standardize on a messaging framework
for HPC. Since then, not a lot has pushed the needle forward.

Of course there are things like OpenMP, but these are not distributed
tools.

Another issue is the difference between "concurrent code" and
parallel execution. Not everything that is concurrent needs
to be executed in parallel, and indeed, depending on
the hardware environment you are targeting, these decisions
may change. And it is not something you can figure out just by
looking at the code.
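To make that concrete, here is a small, illustrative sketch (the array
size and the use of OpenMP are arbitrary choices): the loop is concurrent
either way, but whether it actually executes in parallel is decided at
run time by the environment (OMP_NUM_THREADS, the cores available), not
by anything in the source.

/* Sketch: the loop iterations are independent (concurrent), but whether
 * they run in parallel depends on how the program is built and run,
 * e.g. compile with -fopenmp and set OMP_NUM_THREADS. */
#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(void)
{
    static double a[N];
    double sum = 0.0;

    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        a[i] = i * 0.5;          /* independent work per iteration */
        sum += a[i];
    }

    printf("sum = %f, threads available = %d\n", sum, omp_get_max_threads());
    return 0;
}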
Parallel computing is a hard problem, and no one has
really come up with a general-purpose way to write software for it.
MPI works; however, I still consider it a "parallel machine code"
that requires some careful programming.

The good news is that most of the popular HPC applications
have been ported and will run using MPI (as well as their algorithms
allow). So from an end-user perspective, most everything
works. Of course there could be more applications ported
to MPI, but it all depends. Maybe end users can get enough
performance with a CUDA version and some GPUs, or an
OpenMP version on a 64-core server.

Thus the incentive is not really there. There is no huge financial
push behind HPC software tools like there is with data analytics.

Personally, I like Julia and believe it is the best new language
to enter technical computing. One of the issues it addresses is
the two-language problem: the first cut of something is often written
in Python; then, if it gets to production and is slow or does
not have an easy parallel pathway (local multi-core or distributed),
the code is rewritten in C/C++ or Fortran with MPI, CUDA, or OpenMP.

Julia is fast out of the box and provides a growth path for
parallelism: one version, with no need to rewrite. Plus,
it has something called "multiple dispatch" that provides
unprecedented code flexibility and portability (too long a
discussion for this email). Basically it keeps the end user closer
to their "problem" and further away from the hardware minutiae.

That is enough for now. I'm sure others have opinions worth hearing.

--
Doug

_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit https://beowulf.org/cgi-bin/mailman/listinfo/beowulf
<div style="color: rgb(0, 0, 0); letter-spacing: normal; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class=""><div class=""><br class=""><br class=""><br class=""></div></div>
</div>
<br class=""></div></body></html>