<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<p>Next to your cane. <br>
</p>
<div class="moz-cite-prefix">On 3/14/19 5:52 PM, Jeffrey Layton
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CAJfzO5QeyQ+tePfKy0ST+fidtUzg9tPcy2nafffE23Nk3_Lt=g@mail.gmail.com">
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<div dir="auto">Damn. I knew I forgot something. Now where are my
glasses.
<div dir="auto"><br>
</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Thu, Mar 14, 2019, 17:17
Douglas Eadline <<a href="mailto:deadline@eadline.org"
moz-do-not-send="true">deadline@eadline.org</a>> wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
> I don't want to interrupt the flow but I'M feeling
cheeky. One word can<br>
> solve everything "Fortran". There I said it.<br>
<br>
Of course, but you forgot "now get off my lawn"<br>
<br>
--<br>
Doug<br>
<br>
><br>
> Jeff<br>
><br>
><br>
> On Thu, Mar 14, 2019, 17:03 Douglas Eadline <<a
href="mailto:deadline@eadline.org" target="_blank"
rel="noreferrer" moz-do-not-send="true">deadline@eadline.org</a>>
wrote:<br>
><br>
>><br>
>> > Then given we are reaching these limitations how
come we don’t<br>
>> integrate<br>
>> > certain things from the HPC world into every day
computing so to<br>
>> speak.<br>
>><br>
>> Scalable/parallel computing is hard and hard costs
time and money.<br>
>> In HPC the performance often justifies the means, in
other<br>
>> sectors the cost must justify the means.<br>
>><br>
>> HPC has traditionally trickled down in to other
sectors. However,<br>
>> many or the HPC problem types are not traditional
computing<br>
>> problems. This situation is changing a bit with
things<br>
>> like Hadoop/Spark/Tensor Flow<br>
>><br>
>> --<br>
>> Doug<br>
>><br>
>><br>
>> ><br>
>> > On 14/03/2019, 19:14, "Douglas Eadline" <<a
href="mailto:deadline@eadline.org" target="_blank"
rel="noreferrer" moz-do-not-send="true">deadline@eadline.org</a>><br>
>> wrote:<br>
>> ><br>
>> ><br>
>> > > Hi Douglas,<br>
>> > ><br>
>> > > Isnt there quantum computing being
developed in terms of CPUs at<br>
>> > this<br>
>> > > point?<br>
>> ><br>
>> > QC is (theoretically) unreasonably good at
some things at other<br>
>> > there may me classic algorithms that work
better. As far as I<br>
>> know,<br>
>> > there has been no demonstration of "quantum<br>
>> > supremacy" where a quantum computer is shown<br>
>> > to be faster than a classical algorithm.<br>
>> ><br>
>> > Getting there, not there yet.<br>
>> ><br>
>> > BTW, if you want to know what is going on
with QC<br>
>> > read Scott Aaronson's blog<br>
>> ><br>
>> > <a
href="https://www.scottaaronson.com/blog/" rel="noreferrer
noreferrer" target="_blank" moz-do-not-send="true">https://www.scottaaronson.com/blog/</a><br>
>> ><br>
>> > I usually get through the first few
paragraphs and<br>
>> > then whoosh over my scientific pay grade<br>
>> ><br>
>> ><br>
>> > > Also is it really about the speed any
more rather then how<br>
>> > > optimized the code is to take advantage
of the multiple cores<br>
>> that<br>
>> > a<br>
>> > > system has?<br>
>> ><br>
>> > That is because the clock rate increase
slowed to a crawl.<br>
>> > Adding cores was a way to "offer" more
performance, but introduced<br>
>> > the "multi-core tax." That is, programing
for multi-core is<br>
>> > harder and costlier than a single core.
Also, much<br>
>> > harder to optimize. In HPC we are lucky, we
are used to<br>
>> > designing MPI codes that scale with more
cores (no mater<br>
>> > where they live, same die, next socket,
another server).<br>
>> ><br>
>> > Also, more cores usually means lower single
core<br>
>> > frequency to fit into a given power envelope
(die shrinks help<br>
>> > with this but based on everything I have
read, we are about<br>
>> > at the end of the line) It also means lower
absolute memory<br>
>> > BW per core although more memory channels
help a bit.<br>
>> ><br>
>> > --<br>
>> > Doug<br>
>> ><br>
>> ><br>
>> > ><br>
>> > > On 13/03/2019, 22:22, "Douglas
Eadline" <<br>
>> <a href="mailto:deadline@eadline.org"
target="_blank" rel="noreferrer" moz-do-not-send="true">deadline@eadline.org</a>><br>
>> > wrote:<br>
>> > ><br>
>> > ><br>
>> > > I realize it is bad form to reply
ones own post and<br>
>> > > I forgot to mention something.<br>
>> > ><br>
>> > > Basically the HW performance parade
is getting harder<br>
>> > > to celebrate. Clock frequencies
have been slowly<br>
>> > > increasing while cores are multiply
rather quickly.<br>
>> > > Single core performance boosts are
mostly coming<br>
>> > > from accelerators. Added to the
fact that speculation<br>
>> > > technology when managed for
security, slows things down.<br>
>> > ><br>
>> > > What this means, the focus on
software performance<br>
>> > > and optimization is going to
increase because we can just<br>
>> > > buy new hardware and improve things
anymore.<br>
>> > ><br>
>> > > I believe languages like Julia can
help with this situation.<br>
>> > > For a while.<br>
>> > ><br>
>> > > --<br>
>> > > Doug<br>
>> > ><br>
>> > > >> Hi All,<br>
>> > > >> Basically I have sat down
with my colleague and we have<br>
>> opted<br>
>> > to go<br>
>> > > down<br>
>> > > > the route of Julia with
JuliaDB for this project. But here<br>
>> is<br>
>> > an<br>
>> > > > interesting thought that I
have been pondering if Julia is<br>
>> an<br>
>> > up<br>
>> > > and<br>
>> > > > coming fast language to work
with for large amounts of<br>
>> data<br>
>> > how<br>
>> > > will<br>
>> > > > that<br>
>> > > >> affect HPC and the way it
is currently used and HPC<br>
>> systems<br>
>> > > created?<br>
>> > > ><br>
>> > > ><br>
>> > > > First, IMO good choice.<br>
>> > > ><br>
>> > > > Second a short list of actual
conversations.<br>
>> > > ><br>
>> > > > 1) "This code is written in
Fortran." I have been met with<br>
>> > > > puzzling looks when I say the
the word "Fortran." Then it<br>
>> > > > comes, "... ancient language,
why not port to modern ..."<br>
>> > > > If you are asking that
question young Padawan you have<br>
>> > > > much to learn, maybe try web
pages"<br>
>> > > ><br>
>> > > > 2) I'll just use Python
because it works on my Laptop.<br>
>> > > > Later, "It will just run
faster on a cluster, right?"<br>
>> > > > and "My little Python program
is now kind-of big and has<br>
>> > > > become slow, should I use
TensorFlow?"<br>
>> > > ><br>
>> > > > 3) <mcoy><br>
>> > > > "Dammit Jim, I don't want to
learn/write Fortran,C,C++ and<br>
>> > MPI.<br>
>> > > > I'm a (fill in domain
specific scientific/technical<br>
>> > position)"<br>
>> > > > </mcoy><br>
>> > > ><br>
>> > > > My reply,"I agree and wish
there was a better answer to<br>
>> that<br>
>> > > question.<br>
>> > > > The computing industry has
made great strides in HW with<br>
>> > > > multi-core, clusters etc.
Software tools have always<br>
>> lagged<br>
>> > > > hardware. In the case of HPC
it is a slow process and<br>
>> > > > in HPC the whole programming
"thing" is not as "easy" as<br>
>> > > > it is in other sectors, warp
drives and transporters<br>
>> > > > take a little extra effort.<br>
>> > > ><br>
>> > > > 4) Then I suggest Julia, "I
invite you to try Julia. It is<br>
>> > > > easy to get started, fast, and
can grow with you<br>
>> > application."<br>
>> > > > Then I might say, "In a way it
is HPC BASIC, it you are<br>
>> old<br>
>> > > > enough you will understand
what I mean by that."<br>
>> > > ><br>
>> > > > The question with languages
like Julia (or Chapel, etc)<br>
>> is:<br>
>> > > ><br>
>> > > > "How much performance are
you willing to give up for<br>
>> > > convenience?"<br>
>> > > ><br>
>> > > > The goal is to keep the
programmer close to the problem at<br>
>> > hand<br>
>> > > > and away from the nuances of
the underlying hardware.<br>
>> > Obviously<br>
>> > > > the more performance needed,
the closer you need to get to<br>
>> > the<br>
>> > > hardware.<br>
>> > > > This decision goes beyond
software tools, there are all<br>
>> kinds<br>
>> > > > of cost/benefits that need to
be considered. And, then<br>
>> there<br>
>> > > > is IO ...<br>
>> > > ><br>
>> > > > --<br>
>> > > > Doug<br>
>> > > ><br>
>> > > ><br>
>> > > ><br>
>> > > ><br>
>> > > ><br>
>> > > ><br>
>> > > ><br>
>> > > >> Regards,<br>
>> > > >> Jonathan<br>
>> > > >> -----Original Message-----<br>
>> > > >> From: Beowulf <<a
href="mailto:beowulf-bounces@beowulf.org" target="_blank"
rel="noreferrer" moz-do-not-send="true">beowulf-bounces@beowulf.org</a>>
On Behalf Of<br>
>> > Michael<br>
>> > > Di<br>
>> > > > Domenico<br>
>> > > >> Sent: 04 March 2019 17:39<br>
>> > > >> Cc: Beowulf Mailing List
<<a href="mailto:beowulf@beowulf.org" target="_blank"
rel="noreferrer" moz-do-not-send="true">beowulf@beowulf.org</a>><br>
>> > > >> Subject: Re: [Beowulf]
Large amounts of data to store and<br>
>> > process<br>
>> > > On<br>
>> > > > Mon, Mar 4, 2019 at 8:18 AM
Jonathan Aquilina<br>
>> > > > <<a
href="mailto:jaquilina@eagleeyet.net" target="_blank"
rel="noreferrer" moz-do-not-send="true">jaquilina@eagleeyet.net</a>><br>
>> > > >> wrote:<br>
>> > > >>> As previously
mentioned we<br>
>> > don’t<br>
>> really need to have<br>
>> > > anything<br>
>> > > >>> indexed<br>
>> > > > so I am thinking flat files
are the way to go my only<br>
>> concern<br>
>> > is<br>
>> > > the<br>
>> > > > performance of large flat
files.<br>
>> > > >> potentially, there are
many factors in the work flow that<br>
>> > > ultimately<br>
>> > > > influence the decision as
others have pointed out. my<br>
>> flat<br>
>> > file<br>
>> > > example<br>
>> > > > is only one, where we just
repeatable blow through the<br>
>> files.<br>
>> > > >>> Isnt that what HDFS is
for to deal with large flat<br>
>> files.<br>
>> > > >> large is relative. 256GB
file isn't "large" anymore.<br>
>> i've<br>
>> > pushed<br>
>> > > TB<br>
>> > > > files through hadoop and run
the terabyte sort benchmark,<br>
>> and<br>
>> > yes it<br>
>> > > can<br>
>> > > > be done in minutes
(time-scale), but you need an<br>
>> astounding<br>
>> > amount<br>
>> > > of<br>
>> > > > hardware to do it (the last
benchmark paper i saw, it was<br>
>> > something<br>
>> > > 1000<br>
>> > > > nodes). you can accomplish
the same feat using less and<br>
>> less<br>
>> > > > complicated hardware/software<br>
>> > > >> and if your dev's are
willing to adapt to the hadoop<br>
>> > ecosystem, you<br>
>> > > sunk<br>
>> > > > right off the dock.<br>
>> > > >> to get a more targeted
answer from the numerous smart<br>
>> people<br>
>> > on<br>
>> > > the<br>
>> > > > list,<br>
>> > > >> you'd need to open up the
app and workflow to us.<br>
>> there's<br>
>> > just too<br>
>> > > many<br>
>> > > > variables
_______________________________________________<br>
>> > > >> Beowulf mailing list, <a
href="mailto:Beowulf@beowulf.org" target="_blank"
rel="noreferrer" moz-do-not-send="true">Beowulf@beowulf.org</a>
sponsored by<br>
>> > Penguin<br>
>> > > Computing<br>
>> > > > To change your subscription
(digest mode or unsubscribe)<br>
>> > visit<br>
>> > > >> <a
href="http://www.beowulf.org/mailman/listinfo/beowulf"
rel="noreferrer noreferrer" target="_blank"
moz-do-not-send="true">http://www.beowulf.org/mailman/listinfo/beowulf</a><br>
>> > > >>
_______________________________________________<br>
>> > > >> Beowulf mailing list, <a
href="mailto:Beowulf@beowulf.org" target="_blank"
rel="noreferrer" moz-do-not-send="true">Beowulf@beowulf.org</a>
sponsored by<br>
>> > Penguin<br>
>> > > Computing<br>
>> > > > To change your subscription
(digest mode or unsubscribe)<br>
>> > visit<br>
>> > > >> <a
href="http://www.beowulf.org/mailman/listinfo/beowulf"
rel="noreferrer noreferrer" target="_blank"
moz-do-not-send="true">http://www.beowulf.org/mailman/listinfo/beowulf</a><br>
>> > > ><br>
>> > > ><br>
>> > > > --<br>
>> > > > Doug<br>
>> > > ><br>
>> > > ><br>
>> > > ><br>
>> > > ><br>
>> > > >
_______________________________________________<br>
>> > > > Beowulf mailing list, <a
href="mailto:Beowulf@beowulf.org" target="_blank"
rel="noreferrer" moz-do-not-send="true">Beowulf@beowulf.org</a>
sponsored by<br>
>> > Penguin<br>
>> > > Computing<br>
>> > > > To change your subscription
(digest mode or unsubscribe)<br>
>> > visit<br>
>> > > > <a
href="https://beowulf.org/cgi-bin/mailman/listinfo/beowulf"
rel="noreferrer noreferrer" target="_blank"
moz-do-not-send="true">https://beowulf.org/cgi-bin/mailman/listinfo/beowulf</a><br>
>> > > ><br>
>> > ><br>
>> > ><br>
>> > > --<br>
>> > > Doug<br>
>> > ><br>
>> > ><br>
>> > ><br>
>> > ><br>
>> ><br>
>> ><br>
>> > --<br>
>> > Doug<br>
>> ><br>
>> ><br>
>> ><br>
>> ><br>
>><br>
>><br>
>> --<br>
>> Doug<br>
>><br>
>> _______________________________________________<br>
>> Beowulf mailing list, <a
href="mailto:Beowulf@beowulf.org" target="_blank"
rel="noreferrer" moz-do-not-send="true">Beowulf@beowulf.org</a>
sponsored by Penguin Computing<br>
>> To change your subscription (digest mode or
unsubscribe) visit<br>
>> <a
href="https://beowulf.org/cgi-bin/mailman/listinfo/beowulf"
rel="noreferrer noreferrer" target="_blank"
moz-do-not-send="true">https://beowulf.org/cgi-bin/mailman/listinfo/beowulf</a><br>
>><br>
><br>
<br>
<br>
-- <br>
Doug<br>
<br>
</blockquote>
</div>
<br>
<fieldset class="mimeAttachmentHeader"></fieldset>
<pre class="moz-quote-pre" wrap="">_______________________________________________
Beowulf mailing list, <a class="moz-txt-link-abbreviated" href="mailto:Beowulf@beowulf.org">Beowulf@beowulf.org</a> sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit <a class="moz-txt-link-freetext" href="https://beowulf.org/cgi-bin/mailman/listinfo/beowulf">https://beowulf.org/cgi-bin/mailman/listinfo/beowulf</a>
</pre>
</blockquote>
<pre class="moz-signature" cols="72">--
Prentice Bisbal
Lead Software Engineer
Princeton Plasma Physics Laboratory
<a class="moz-txt-link-freetext" href="https://www.pppl.gov">https://www.pppl.gov</a></pre>
</body>
</html>