[Beowulf] Working for DUG, new thread

John Hearns hearnsj at googlemail.com
Tue Jun 19 21:07:47 PDT 2018


Jim Lux wrote:

> I've been intrigued recently about using GPUs for signal processing kinds
> of things. There's not much difference between calculating vertices of
> triangles and doing FIR filters.

Rather than look at hardware per se, how about learning about the Julia
language for this task?
I was discussing signal processing with someone who works on hearing
aids; they code in Julia. Sadly, I missed his talk at the Meetup in
Eindhoven.

https://discourse.julialang.org/c/domain/dsp


More on topic, I am not sure how well Julia is suited to the Xeon Phi at
the moment; thread support in Julia is still developing:
https://docs.julialang.org/en/latest/base/multi-threading/
It would be interesting to see whether Julia will run on the Xeon Phi.
Maybe a certain geophysics company could have codes written in a single
language which would do both the heavy-duty processing and the
visualization.




On 20 June 2018 at 05:57, John Hearns <hearnsj at googlemail.com> wrote:

> This thread is going fast!
>
> Prentice Bisbal wrote:
> > I often wonder if that misleading marketing is one of the reasons why
> > the Xeon Phi has already been canned. I know a lot of people who were
> > excited for the Xeon Phi, but I don't know any who ever bought the
> > Xeon Phis once they came out.
>
> In the UK, at my last company, we had a customer in the defence sector
> who bought lots of Xeon Phis. Great guy, full of enthusiasm and good to
> work with (hello Kirk!).
> They were installed with IBM Platform before I joined the company; I
> re-installed the cluster with Bright, which brought it up to date.
> That is the cluster which used Teradici PCoIP to connect via secure
> fibre-optic links.
>
>
>
>
>
>
>
>
> On 20 June 2018 at 04:49, Stu Midgley <sdm900 at gmail.com> wrote:
>
>> We initially used them as standalone systems (i.e. rsh a code onto them
>> and run it).
>>
>> Today we use them in offload mode (i.e. the host pushes memory and
>> commands onto them and pulls the results off, all via pragmas).
>>
>> Our last KNC systems were 2RU with 8x 7120 Phis... which is a 2.1kW
>> system.  They absolutely fly...
>>
>>
>> On Wed, Jun 20, 2018 at 5:48 AM Ryan Novosielski <novosirj at rutgers.edu>
>> wrote:
>>
>>> We bought KNC a long time ago and keep meaning to get them to a place
>>> where they can be used and just haven’t. Do you mount filesystems from
>>> them? We have GPFS storage, primarily, and would have to re-export it via
>>> NFS I suppose if we want the cards to use that storage. I’ve seen
>>> complaints about the stability of that setup. I didn’t try to build the
>>> GPFS portability layer for Phi — not sure whether to think it would or
>>> wouldn’t work (I guess I’d be inclined to doubt it).
>>>
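On the NFS question: the usual re-export setup is just an exports entry on the host, served over the bridge network the cards sit on. A sketch follows; every path, address, and hostname here is hypothetical, and this says nothing about the stability concerns Ryan mentions when the underlying filesystem is GPFS.

```
# /etc/exports on the Phi host: re-export the GPFS mount point to the
# cards' bridge subnet (paths and addresses are examples only)
/gpfs/fs0  172.31.1.0/24(rw,no_root_squash,fsid=1)

# then, on the host:        exportfs -ra
# and on each card, e.g.:   mount host-mic0:/gpfs/fs0 /gpfs/fs0
```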
>>> > On Jun 14, 2018, at 12:02 AM, Stu Midgley <sdm900 at gmail.com> wrote:
>>> >
>>> > Phi is dead... Long live phi...
>>> >
>>> > By which I mean: while the Phi as a chip is going away, its concepts
>>> live on.  A massive number of cores, wide vectorisation, and high-speed
>>> memory (and a fucking high heat load - we do ~350W/socket).  So, while
>>> the product code will disappear, the Phi lives on.
>>> >
>>> > For KNC I did a lot of customisation to MPSS to get it to work... and
>>> we haven't been able to shift from one of the very early versions.  We
>>> love the KNC... we get 8 in 2RU, which is awesome density (1.1kW/RU).
>>> >
>>> > For KNL it's just x86 with a big vectorisation unit (700W/RU).
>>> >
>>> > In both cases you have to be very very careful how you manage memory.
>>> >
>>> >
>>> >
>>> > On Thu, Jun 14, 2018 at 10:33 AM Joe Landman <joe.landman at gmail.com>
>>> wrote:
>>> > I'm curious about your next gen plans, given Phi's roadmap.
>>> >
>>> > On 6/13/18 9:17 PM, Stu Midgley wrote:
>>> >> Low-level HPC means... lots of things.  BUT we are a huge Xeon Phi
>>> shop and need low-level programmers, i.e. avx512, careful cache/memory
>>> management (NOT openmp/compiler vectorisation etc.).
>>> >
>>> > I played around with avx512 in my rzf code.
>>> https://github.com/joelandman/rzf/blob/master/avx2/rzf_avx512.c  .
>>> Never really spent a great deal of time on it, other than noting that using
>>> avx512 seemed to downclock the core a bit on Skylake.
>>> >
>>> > Which dev/toolchain are you using for Phi?  I set up the MPSS bit for
>>> a customer, and it was pretty bad (2.6.32 kernel, etc.).  Flaky control
>>> plane, and a painful host->coprocessor interface.  Did you develop your
>>> own?  Definitely curious.
>>> >
>>> >
>>> >>
>>> >>
>>> >>
>>> >> On Thu, Jun 14, 2018 at 1:08 AM Jonathan Engwall <
>>> engwalljonathanthereal at gmail.com> wrote:
>>> >> John Hearns wrote:
>>> >> > Stuart Midgley works for DUG?  They are currently
>>> >> > recruiting for an HPC manager in London... Interesting...
>>> >>
>>> >> Recruitment at DUG wants to call me about low-level HPC. I have at
>>> least until 6pm.
>>> >> I am excited but also terrified. My background is C and now
>>> JavaScript, mostly online coursework and telnet MUDs.
>>> >> Any suggestions are very much needed.
>>> >> What must a "low-level HPC" programmer know on day 1?
>>> >> Jonathan Engwall
>>> >> engwalljonathanthereal at gmail.com
>>> >>
>>> >> _______________________________________________
>>> >> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin
>>> Computing
>>> >> To change your subscription (digest mode or unsubscribe) visit
>>> http://www.beowulf.org/mailman/listinfo/beowulf
>>> >>
>>> >>
>>> >> --
>>> >> Dr Stuart Midgley
>>> >> sdm900 at gmail.com
>>> >>
>>> >>
>>> >
>>> > --
>>> > Joe Landman
>>> > e:
>>> > joe.landman at gmail.com
>>> >
>>> > t: @hpcjoe
>>> > c: +1 734 612 4615
>>> > w:
>>> > https://scalability.org
>>> >
>>> > g:
>>> > https://github.com/joelandman
>>> >
>>> > l:
>>> > https://www.linkedin.com/in/joelandman
>>> >
>>> >
>>> > --
>>> > Dr Stuart Midgley
>>> > sdm900 at gmail.com
>>>
>>> --
>>> ____
>>> || \\UTGERS,     |---------------------------*
>>> O*---------------------------
>>> ||_// the State  |         Ryan Novosielski - novosirj at rutgers.edu
>>> || \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS
>>> Campus
>>> ||  \\    of NJ  | Office of Advanced Research Computing - MSB C630,
>>> Newark
>>>      `'
>>>
>>>
>>
>> --
>> Dr Stuart Midgley
>> sdm900 at gmail.com
>>
>>
>>
>

