<div dir="ltr"><div dir="ltr">On Tue, Oct 13, 2020 at 9:55 AM Douglas Eadline <<a href="mailto:deadline@eadline.org">deadline@eadline.org</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>Spark is a completely separate code base that has its own Map Reduce<br>
engine. It can work stand-alone, with the YARN scheduler, or with<br>
other schedulers. It can also take advantage of HDFS.<br></blockquote><div><br></div><div>Doug, this is correct. I think for all practical purposes Hadoop and Spark get lumped into the same bag because the underlying ideas are coming from the same place. A lot of people saw Spark (esp. at the beginning) as a much faster, in-memory Hadoop.<br></div></div></div>
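
For anyone following along, here is a minimal PySpark sketch of what Doug describes: the same Spark job can target different cluster managers and read from HDFS. The hostnames, ports, and paths below are placeholders I made up, not endpoints from any real cluster.

    # Minimal sketch: one Spark job, pluggable cluster managers, HDFS input.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("cluster-manager-sketch")
        # Pick one master depending on how the cluster is run:
        #   .master("local[*]")                  # stand-alone on a single machine
        #   .master("spark://master-host:7077")  # Spark's own standalone scheduler
        #   .master("yarn")                      # the YARN scheduler
        .master("local[*]")
        .getOrCreate()
    )

    # Spark reads HDFS paths natively; a local file:// path works the same way.
    lines = spark.read.text("hdfs://namenode:8020/data/input.txt")
    print(lines.count())

    spark.stop()

The point being that the cluster manager is just a setting; none of the application code changes between stand-alone, YARN, or any other scheduler.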