<div dir="ltr"><div dir="ltr">On Tue, Oct 13, 2020 at 1:31 PM Douglas Eadline <<a href="mailto:deadline@eadline.org">deadline@eadline.org</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>The reality is almost all Analytics projects require multiple<br>
tools. For instance, Spark is great, but if you do some<br>
data munging of CSV files and want to store your results<br>
at scale you can't write a single file to your local file<br>
system. Often times you write it as a Hive table to HDFS<br>
(e.g. in Parquet format) so it is available for Hive SQL<br>
> queries or for other tools to use.

You can also commit the results to a database (but you can't have one of
those running on a traditional HPC cluster). What would be nice would be
HDFS running on a traditional cluster, but that would break the whole
"parallel filesystem exposed as a single mount point" model.... It is funny
how these things evolved apart from each other to the point that they are
impossible to marry, no?
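
For anyone who wants to see what that last step looks like, here is a rough
PySpark sketch of the CSV-munge-then-save-as-Hive-table workflow Doug
describes (the paths, database, and table names are placeholders, and it
assumes the cluster already has a Hive metastore configured):

    # Sketch only: read CSVs, clean them, write a Parquet-backed Hive table.
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("csv-munge-example")
             .enableHiveSupport()      # register tables in the Hive metastore
             .getOrCreate())

    # Read the raw CSV files from HDFS (placeholder path)
    df = spark.read.csv("hdfs:///data/raw/events/*.csv",
                        header=True, inferSchema=True)

    # ... whatever munging you need; dropping null rows as a stand-in ...
    cleaned = df.dropna()

    # Write the result as a Parquet-backed Hive table so Hive SQL
    # (or other tools) can query it later.
    (cleaned.write
            .format("parquet")
            .mode("overwrite")
            .saveAsTable("analytics.events_clean"))

The "commit to a database" route is the same idea, just ending with
cleaned.write.jdbc(url, table, properties=...) instead of saveAsTable().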