<div dir="ltr"><div>Jonathan, damn good question.</div><div>There is a lot of debate at the moment on how 'traditional' HPC can co-exist with 'big data' style HPC.</div><div><br></div><div>Regarding Julia, I am a big fan of it, and it brings a task-level paradigm to HPC work.</div><div>To be honest though, traditional Fortran codes will be with us forever. No-one is going to refactor, say, a weather forecasting model in a national centre.</div><div>Also, Python has the mindshare at the moment. I have seen people in my company enthusiastically taking up Python.</div><div>Not because of a measured choice after scanning dozens of learned papers and Reddit reviews etc.</div><div>If that were the case then they might opt for Go or some niche language.</div><div>No, the choice is made because their colleagues already use Python and pass on start-up codes, and there is a huge Python community.</div><div><br></div><div>Same with traditional HPC codes really - we all know that batch scripts are passed on through the generations like Holy Books,</div><div>and most scientists don't have a clue what these scratches on clay tablets actually DO.</div><div>This leads people to continue to run batch jobs which are hard-wired for 12 cores on a 20-core machine, etc. etc.</div><div><br></div><div>(*) this is worthy of debate. In Formula 1, whenever we updated the version of our CFD code we re-ran a known simulation and made sure we still had correlation.</div><div>It is inevitable that old versions of codes will stop being supported.</div><div><br></div></div><br><div class="gmail_quote"><div class="gmail_attr" dir="ltr">On Sun, 10 Mar 2019 at 09:29, Jonathan Aquilina <<a href="mailto:jaquilina@eagleeyet.net">jaquilina@eagleeyet.net</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;padding-left:1ex;border-left-color:rgb(204,204,204);border-left-width:1px;border-left-style:solid">Hi All,<br>
<br>
Basically, I have sat down with my colleague and we have opted to go down the route of Julia with JuliaDB for this project. But here is an interesting thought that I have been pondering: if Julia is an up-and-coming fast language for working with large amounts of data, how will that affect HPC, the way it is currently used, and how HPC systems are built?<br>
<br>
Regards,<br>
Jonathan<br>
<br>
-----Original Message-----<br>
From: Beowulf <<a href="mailto:beowulf-bounces@beowulf.org" target="_blank">beowulf-bounces@beowulf.org</a>> On Behalf Of Michael Di Domenico<br>
Sent: 04 March 2019 17:39<br>
Cc: Beowulf Mailing List <<a href="mailto:beowulf@beowulf.org" target="_blank">beowulf@beowulf.org</a>><br>
Subject: Re: [Beowulf] Large amounts of data to store and process<br>
<br>
On Mon, Mar 4, 2019 at 8:18 AM Jonathan Aquilina <<a href="mailto:jaquilina@eagleeyet.net" target="_blank">jaquilina@eagleeyet.net</a>> wrote:<br>
><br>
> As previously mentioned, we don’t really need to have anything indexed, so I am thinking flat files are the way to go; my only concern is the performance of large flat files.<br>
<br>
potentially. there are many factors in the workflow that ultimately influence the decision, as others have pointed out. my flat file example is only one, where we just repeatedly blow through the files.<br>
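for what it's worth, that repeated-scan pattern can be sketched in a few lines of Python — a minimal illustration, not the actual workflow; the 64 MiB chunk size and the byte-count "work" are arbitrary placeholders:

```python
# sketch only: stream a large flat file in fixed-size chunks so memory
# use stays constant no matter how big the file gets.

CHUNK = 64 * 1024 * 1024  # 64 MiB per read; tune for your storage


def scan_file(path, chunk_size=CHUNK):
    """Blow through the file once, returning total bytes seen
    (a stand-in for whatever per-chunk processing you really do)."""
    total = 0
    with open(path, "rb") as f:
        while True:
            buf = f.read(chunk_size)
            if not buf:
                break
            total += len(buf)  # replace with real work on `buf`
    return total
```

sequential streaming like this keeps the OS read-ahead working for you, which is usually where plain flat files win over anything indexed.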
<br>
> Isn't that what HDFS is for, to deal with large flat files?<br>
<br>
large is relative. a 256GB file isn't "large" anymore. i've pushed TB files through hadoop and run the terabyte sort benchmark, and yes it can be done in minutes (time-scale), but you need an astounding amount of hardware to do it (the last benchmark paper i saw, it was something like 1000 nodes). you can accomplish the same feat using less, and less complicated, hardware/software.<br>
<br>
and unless your devs are willing to adapt to the hadoop ecosystem, you're sunk right off the dock.<br>
<br>
to get a more targeted answer from the numerous smart people on the list, you'd need to open up the app and workflow to us. there's just too many variables.<br>
_______________________________________________<br>
Beowulf mailing list, <a href="mailto:Beowulf@beowulf.org" target="_blank">Beowulf@beowulf.org</a> sponsored by Penguin Computing. To change your subscription (digest mode or unsubscribe) visit <a href="http://www.beowulf.org/mailman/listinfo/beowulf" target="_blank" rel="noreferrer">http://www.beowulf.org/mailman/listinfo/beowulf</a><br>
</blockquote></div>