<div dir="ltr"><div dir="ltr">On Tue, Oct 13, 2020 at 8:33 AM Michael Di Domenico <<a href="mailto:mdidomenico4@gmail.com">mdidomenico4@gmail.com</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">i can't speak from a general industry sense, but i've had everything<br>
run through my center over the past 11 years. Hadoop seemed like<br>
something that was going to take off. it didn't with my group of<br>
users. we aren't counting clicks nor parsing text from huge files, so<br>
its utility to us faded. my understanding is the group behind hadoop<br>
also made several industry missteps when trying to commercialize, i'm<br>
not sure what happened after that. i think a lot people realized that<br>
hadoop made things easier, but the overhead was too high given the<br>
limited functionality most people wanted to use it for<br></blockquote><div><br></div><div>Michael, thank you for the insight. I think Hadoop in general is mostly dying; Spark is really the derivative that took off. Basically, what you are saying is that there is no demand on your infrastructure for this kind of work. Do you have any insight as to why not? Do the AI/DS/ML folks simply know that they cannot run their standard workloads on your resources, and so go straight to the cloud or to local Ethernet clusters?</div><div><br></div><div>In your estimate, how many of your users write code in Julia vs. MPI vs. Python?<br></div></div></div>