<div dir="ltr">Jim Lux wrote:<div>
<span class="gmail-im" style="color:rgb(80,0,80);font-size:12.8px;background-color:rgb(255,255,255);text-decoration-style:initial;text-decoration-color:initial"><br></span><span style="font-size:12.8px;background-color:rgb(255,255,255);text-decoration-style:initial;text-decoration-color:initial;float:none;display:inline">> I've been intrigued recently by using GPUs for signal processing kinds of things... There's not much difference between calculating vertices of triangles and doing FIR filters.</span>
<br></div><div><br></div><div>Rather than look at hardware per se, how about learning about the Julia language for this task?</div><div>I was discussing signal processing with someone who works on hearing aids; they code in Julia. I sadly missed his talk at the Meetup in Eindhoven.</div><div><br></div><div><a href="https://discourse.julialang.org/c/domain/dsp">https://discourse.julialang.org/c/domain/dsp</a><br></div><div><br></div><div><br></div><div>More on topic, I am not sure how well Julia is suited to Xeon Phi at the moment. Thread support in Julia is still developing:</div><div><a href="https://docs.julialang.org/en/latest/base/multi-threading/">https://docs.julialang.org/en/latest/base/multi-threading/</a><br></div><div>It would be interesting to see if Julia will run on Xeon Phi. Maybe a certain geophysics company could have codes written in one language which would do the heavy-duty processing and the visualization too.</div><div><br></div><div><br></div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On 20 June 2018 at 05:57, John Hearns <span dir="ltr"><<a href="mailto:hearnsj@googlemail.com" target="_blank">hearnsj@googlemail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">This thread is going fast!<span class=""><div><br></div><div>Prentice Bisbal wrote:</div><div><span style="float:none;background-color:transparent;font-size:12.8px;font-variant-numeric:normal;font-variant-east-asian:normal;text-align:left;text-decoration:none;display:inline">> I often wonder if that misleading marketing is one of the reasons why the Xeon Phi has already been canned. 
I know a lot of people who were excited for the Xeon Phi, but I don't know any who ever bought the Xeon Phis once they came out.</span><br></div><div><span style="float:none;background-color:transparent;font-size:12.8px;font-variant-numeric:normal;font-variant-east-asian:normal;text-align:left;text-decoration:none;display:inline"><br></span></div></span><div><span style="float:none;background-color:transparent;font-size:12.8px;font-variant-numeric:normal;font-variant-east-asian:normal;text-align:left;text-decoration:none;display:inline">In the UK at my last company we had a customer in the defence sector who bought lots of Xeon Phi. Great guy, full of enthusiasm and good to work with (Hello Kirk!)</span></div><div><span style="float:none;background-color:transparent;font-size:12.8px;font-variant-numeric:normal;font-variant-east-asian:normal;text-align:left;text-decoration:none;display:inline">They were installed with IBM Platform before I joined the company. I re-installed the cluster with Bright, which brought it up to date.</span></div><div>That is the cluster that used Teradici PCoIP to connect over secure fibre-optic links.</div><div><span style="float:none;background-color:transparent;font-size:12.8px;font-variant-numeric:normal;font-variant-east-asian:normal;text-align:left;text-decoration:none;display:inline"><br></span></div><div><span 
style="float:none;background-color:transparent;font-size:12.8px;font-variant-numeric:normal;font-variant-east-asian:normal;text-align:left;text-decoration:none;display:inline"><br></span></div><div><span style="float:none;background-color:transparent;font-size:12.8px;font-variant-numeric:normal;font-variant-east-asian:normal;text-align:left;text-decoration:none;display:inline"><br></span></div><div><span style="float:none;background-color:transparent;font-size:12.8px;font-variant-numeric:normal;font-variant-east-asian:normal;text-align:left;text-decoration:none;display:inline"><br></span></div></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On 20 June 2018 at 04:49, Stu Midgley <span dir="ltr"><<a href="mailto:sdm900@gmail.com" target="_blank">sdm900@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">We initially used them as standalone systems (i.e. rsh a code onto them and run it).<div><br></div><div>Today we use them in offload mode (i.e. the host pushes memory+commands onto them and pulls the results off - all <span style="background-color:rgb(255,255,255);text-decoration-style:initial;text-decoration-color:initial;float:none;display:inline">via pragmas</span>).</div><div><br></div><div>Our last KNC systems were 2RU with 8x 7120 Phis... which is a 2.1kW system. They absolutely fly...</div><div><br></div></div><div class="m_5951376239312026228HOEnZb"><div class="m_5951376239312026228h5"><br><div class="gmail_quote"><div dir="ltr">On Wed, Jun 20, 2018 at 5:48 AM Ryan Novosielski <<a href="mailto:novosirj@rutgers.edu" target="_blank">novosirj@rutgers.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">We bought KNC a long time ago and keep meaning to get them to a place where they can be used, and just haven’t. 
Do you mount filesystems from them? Our storage is primarily GPFS, so I suppose we would have to re-export it to the cards via NFS, and I’ve seen complaints about the stability of that setup. I didn’t try to build the GPFS portability layer for Phi; I’m not sure whether it would work (I’d be inclined to doubt it).<br>
<br>
> On Jun 14, 2018, at 12:02 AM, Stu Midgley <<a href="mailto:sdm900@gmail.com" target="_blank">sdm900@gmail.com</a>> wrote:<br>
> <br>
> Phi is dead... Long live Phi...<br>
> <br>
> By which I mean: while the Phi as a chip is going away, its concepts live on - a massive number of cores, wide vectorisation and high-speed memory (and a fucking high heat load - we do ~350W/socket). So, while the product itself will disappear, Phi lives on.<br>
> <br>
> For KNC I did a lot of customisation to MPSS to get it to work... and we haven't been able to shift from one of the very early versions. We love the KNC... we get 8 in 2RU, which is awesome density (1.1kW/RU).<br>
> <br>
> For KNL it's just x86 with a big vectorisation unit (700W/RU).<br>
> <br>
> In both cases you have to be very very careful how you manage memory.<br>
> <br>
> <br>
> <br>
> On Thu, Jun 14, 2018 at 10:33 AM Joe Landman <<a href="mailto:joe.landman@gmail.com" target="_blank">joe.landman@gmail.com</a>> wrote:<br>
> I'm curious about your next gen plans, given Phi's roadmap.<br>
> <br>
> On 6/13/18 9:17 PM, Stu Midgley wrote:<br>
>> low level HPC means... lots of things. BUT we are a huge Xeon Phi shop and need low-level programmers, i.e. avx512, careful cache/memory management (NOT openmp/compiler vectorisation etc.).<br>
> <br>
> I played around with avx512 in my rzf code. <a href="https://github.com/joelandman/rzf/blob/master/avx2/rzf_avx512.c" rel="noreferrer" target="_blank">https://github.com/joelandman/rzf/blob/master/avx2/rzf_avx512.c</a>. Never really spent a great deal of time on it, other than noting that using avx512 seemed to downclock the core a bit on Skylake.<br>
> <br>
> Which dev/toolchain are you using for Phi? I set up the MPSS bit for a customer, and it was pretty bad (2.6.32 kernel, etc.). Flaky control plane, and a painful host->coprocessor interface. Did you develop your own? Definitely curious.<br>
> <br>
> <br>
>> <br>
>> <br>
>> <br>
>> On Thu, Jun 14, 2018 at 1:08 AM Jonathan Engwall <<a href="mailto:engwalljonathanthereal@gmail.com" target="_blank">engwalljonathanthereal@gmail.<wbr>com</a>> wrote:<br>
>> John Hearne wrote:<br>
>> > Stuart Midgley works for DUG? They are currently<br>
>> > recruiting for an HPC manager in London... Interesting...<br>
>> <br>
>> Recruitment at DUG wants to call me about Low Level HPC. I have at least until 6pm.<br>
>> I am excited but also terrified. My background is C and now JavaScript, mostly online course work and telnet MUDs.<br>
>> Any suggestions are very much needed.<br>
>> What must a "low level HPC" programmer know on day 1???<br>
>> Jonathan Engwall<br>
>> <a href="mailto:engwalljonathanthereal@gmail.com" target="_blank">engwalljonathanthereal@gmail.c<wbr>om</a><br>
>> <br>
>> ______________________________<wbr>_________________<br>
>> Beowulf mailing list, <a href="mailto:Beowulf@beowulf.org" target="_blank">Beowulf@beowulf.org</a> sponsored by Penguin Computing<br>
>> To change your subscription (digest mode or unsubscribe) visit <a href="http://www.beowulf.org/mailman/listinfo/beowulf" rel="noreferrer" target="_blank">http://www.beowulf.org/mailman<wbr>/listinfo/beowulf</a><br>
>> <br>
>> <br>
>> --<br>
>> Dr Stuart Midgley<br>
>> <a href="mailto:sdm900@gmail.com" target="_blank">sdm900@gmail.com</a><br>
>> <br>
>> <br>
> <br>
> --<br>
> Joe Landman<br>
> e: <a href="mailto:joe.landman@gmail.com" target="_blank">joe.landman@gmail.com</a><br>
> t: @hpcjoe<br>
> c: +1 734 612 4615<br>
> w: <a href="https://scalability.org" rel="noreferrer" target="_blank">https://scalability.org</a><br>
> g: <a href="https://github.com/joelandman" rel="noreferrer" target="_blank">https://github.com/joelandman</a><br>
> l: <a href="https://www.linkedin.com/in/joelandman" rel="noreferrer" target="_blank">https://www.linkedin.com/in/joelandman</a><br>
> <br>
> <br>
> --<br>
> Dr Stuart Midgley<br>
> <a href="mailto:sdm900@gmail.com" target="_blank">sdm900@gmail.com</a><br>
<br>
--<br>
____<br>
|| \\UTGERS, |---------------------------*<wbr>O*---------------------------<br>
||_// the State | Ryan Novosielski - <a href="mailto:novosirj@rutgers.edu" target="_blank">novosirj@rutgers.edu</a><br>
|| \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus<br>
|| \\ of NJ | Office of Advanced Research Computing - MSB C630, Newark<br>
`'<br>
<br>
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr" class="m_5951376239312026228m_-8628846199282790190gmail_signature" data-smartmail="gmail_signature"><div dir="ltr">Dr Stuart Midgley<br><a href="mailto:sdm900@gmail.com" target="_blank">sdm900@gmail.com</a></div></div>
</div></div><br>
<br></blockquote></div><br></div>
</div></div></blockquote></div><br></div>