[Beowulf] [OT] MPI-haters

Justin Y. Shi shi at temple.edu
Fri Mar 4 07:05:21 PST 2016


Thank you for creating the list. I have subscribed.

Justin

On Fri, Mar 4, 2016 at 5:43 AM, C Bergström <cbergstrom at pathscale.com>
wrote:

> Sorry for the shameless self-indulgence, but there seems to be a
> growing trend of love/hate around MPI. I'll leave my opinions aside,
> but at the same time I'd love to connect with and host a list where
> others who are passionate about scalability can vent and openly discuss ideas.
>
> Despite the comical name, I've created mpi-haters mailing list
> http://lists.pathscale.com/mailman/listinfo/mpi-haters_lists.pathscale.com
>
> To start things off - Some of the ideas I've been privately bouncing around
>
> Can current directive-based approaches (OMP/ACC) be extended to scale
> out? (I've seen some research out of Japan on this or something similar.)
>
> Is Chapel's C-like syntax similar enough to be easily implemented in Clang?
>
> Can one low-level library succeed at creating a clean interface across
> all popular industry interconnects (libfabric vs. UCX)?
>
> Real-world successes or failures of "exascale" runtimes? (What's your
> experience? Let's not pull any punches.)
>
> I won't claim to have seen ridiculous scalability in most of the web
> applications I've worked on, but they had so many tools available. Why
> have I never heard of memcached being used on a supercomputer, and why
> isn't sharding ever mentioned?
>
> Everyone is welcome, and let's keep it positive and fun. Invite your friends!
>
>
> ./C
>
> ps - Apologies if you get this message more than once.
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit
> http://www.beowulf.org/mailman/listinfo/beowulf
>