[Beowulf] Re: Spark, Julia, OpenMPI etc. - all in one place
Michael Di Domenico
mdidomenico4 at gmail.com
Wed Oct 14 05:42:07 PDT 2020
On Wed, Oct 14, 2020 at 8:02 AM Oddo Da <oddodaoddo at gmail.com> wrote:
>
> Thank you. I wrote distributed/parallel code back when PVM and MPI were competing frameworks - I had the privilege of watching things transition from "big iron" multi-CPU machines from Sun, DEC etc. to Beowulf clusters and commodity hardware. At the time, things like MPI seemed like real godsends. Don't get me wrong, I am not criticizing MPI, just wondering why nothing has come along to provide a higher level of abstraction (with MPI underneath if necessary). Folks like Doug talk about the lack of financial incentive, but we are in academia, and I am curious why nobody came along and just did it as a research project, for example, as opposed to being motivated by a potential commercial payoff down the road. I also spent time in industry starting in 2012 ("big data", how I dislike this phrase, but for lack of a better one...) - things like Spark evolved in parallel with functional languages like Scala, so at least you see some progression towards more verifiable code, code you can reason about more easily, lazy evaluation, and so on. Meanwhile, in traditional HPC we are still where we were 20 years ago and the same books on MPI apply. I understand _why_ things like Spark evolved separately and differently (a company generally does not have the luxury of an HPC cluster with a pay-for parallel filesystem, but it may have some machines on the Ethernet it can put together in a logical "cluster"), and I am not saying we need the same thing in HPC; I am just curious about (what I perceive as) the lack of progress on the HPC side.
i think you're implying (perhaps not consciously, or i'm reading more
into your statements) that MPI/PVM are the only frameworks for
message passing out there. this isn't true: Charm++, UPC, SHMEM, etc.
have all been developed to do basically what MPI does. MPI (i
believe) is just the only one that's been ratified into a standard, and
thus it provides the code stability interconnect vendors need to write
the shim code between the hardware and the library.
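for anyone who hasn't written one, all of these frameworks standardize some flavor of the same send/receive pattern between ranks. here's a toy stdlib-python sketch of that pattern - not MPI and not any of the libraries above, just queues standing in for the interconnect so the shape of a two-rank exchange is visible (the MPI_Send/MPI_Recv comments mark the analogous calls):

```python
# Toy sketch of the message-passing model: two "ranks" exchanging
# data over channels. Stdlib queues stand in for the interconnect;
# a real MPI job would run ranks as separate processes on separate nodes.
import threading
import queue

def run_ranks():
    to_rank1 = queue.Queue()   # channel: rank 0 -> rank 1
    to_rank0 = queue.Queue()   # channel: rank 1 -> rank 0
    result = {}

    def rank0():
        to_rank1.put([1, 2, 3, 4])        # analogous to MPI_Send
        result["total"] = to_rank0.get()  # analogous to MPI_Recv (blocks)

    def rank1():
        data = to_rank1.get()             # analogous to MPI_Recv (blocks)
        to_rank0.put(sum(data))           # analogous to MPI_Send

    t0 = threading.Thread(target=rank0)
    t1 = threading.Thread(target=rank1)
    t0.start(); t1.start()
    t0.join(); t1.join()
    return result["total"]
```

the shim code mentioned above is exactly what lets those conceptual send/recv calls map onto whatever hardware (InfiniBand, Omni-Path, plain Ethernet) sits underneath.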
i think your "why" for spark evolving separately is flawed. Spark/Hadoop
didn't evolve out of a company's inability to pay for a cluster
or a filesystem; they were designed to solve a very specific problem:
big-data processing. you could certainly write an MPI program to do
the same thing, but the design of Spark/Hadoop does it more
efficiently and at a lower cost. the point is that each framework does
something specific really well: Spark does data processing, MPI can do
complex math (yes, no need to circle the flaming wagons, both can do
both, but each does one better than the other).
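to make the "data processing" claim concrete: the pattern Spark expresses in one line (roughly rdd.flatMap(str.split).map(lambda w: (w, 1)).reduceByKey(add) for word count) is a map/shuffle/reduce pipeline. here's a stdlib-only python sketch of that same pipeline - not actual Spark, and running on one machine, purely to show the abstraction a programmer gets for free instead of hand-coding the equivalent message passing in MPI:

```python
# Stdlib sketch of the map/reduce pattern Spark popularized for
# big-data processing. Counter plays the role of reduceByKey;
# a real Spark job would partition each stage across a cluster.
from collections import Counter
from itertools import chain

def word_count(lines):
    # "flatMap" stage: split every line into words, flatten the result
    words = chain.from_iterable(line.split() for line in lines)
    # "reduceByKey" stage: sum a count of 1 per word occurrence
    return dict(Counter(words))

counts = word_count(["big data big iron", "big cluster"])
```

the efficiency argument in the paragraph above is about exactly this: the framework handles partitioning, shuffling, and fault tolerance, so the data-processing logic stays this small.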
i believe your "lack of progress" statement is really just a
misunderstanding of what MPI represents. to me, MPI is like the
flathead screw: it's been around a long time and there are
certainly a ton of alternate head designs on the market. however,
wooden boat builders still use flatheads because they just work, they're
easy to make, and they're easy to fix when it comes time for repairs.
you're equating a 20-year-old book with a lack of progress, and frankly i
think that's a flawed argument.