[Beowulf] Working for DUG, new thread

Jonathan Engwall engwalljonathanthereal at gmail.com
Tue Jun 19 13:52:58 PDT 2018


I think the boundary between a final product and the start of a project
separates these two viewpoints.
Lately I have short stacks of O'Reillys scattered about, pulled from
libraries, and a second stack of notebooks filled with every command that
really did work.
And I think it is fun.
Jonathan

On Jun 19, 2018 12:11 PM, "Joe Landman" <joe.landman at gmail.com> wrote:



On 6/19/18 2:47 PM, Prentice Bisbal wrote:

>
> On 06/13/2018 10:32 PM, Joe Landman wrote:
>
>>
>> I'm curious about your next gen plans, given Phi's roadmap.
>>
>>
>> On 6/13/18 9:17 PM, Stu Midgley wrote:
>>
>>> low level HPC means... lots of things.  BUT we are a huge Xeon Phi shop
>>> and need low-level programmers, i.e. avx512, careful cache/memory management
>>> (NOT openmp/compiler vectorisation etc).
>>>
>>
>> I played around with avx512 in my rzf code.
>> https://github.com/joelandman/rzf/blob/master/avx2/rzf_avx512.c .  Never
>> really spent a great deal of time on it, other than noting that using
>> avx512 seemed to downclock the core a bit on Skylake.
>>
>
> If you organize your code correctly, and call the compiler with the right
> optimization flags, shouldn't the compiler automatically handle a good
> portion of this 'low-level' stuff?
>

I wish it did this well, but it turns out it doesn't do a good job.  You
have to pay very careful attention to almost every aspect of making the
code simple for the compiler, and then constrain the directions its code
generation takes.
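
To give a feel for what "making it simple" means (a throwaway sketch of
mine, not code from rzf; the function name and flags are just for
illustration): even a trivial saxpy-style loop needs restrict-qualified
pointers before gcc will rule out aliasing and vectorize, and you still
want to check what it actually emitted:

    /* build: gcc -O3 -march=skylake-avx512 -fopt-info-vec -c saxpy.c
       and read the vectorizer report / disassembly afterwards */
    void saxpy(long n, float a,
               const float * restrict x, float * restrict y)
    {
        /* restrict tells gcc that x and y don't alias; without it
           the vectorizer may emit runtime overlap checks or give up */
        for (long i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }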

I explored this with my RZF stuff.  It turns out that with -O3, gcc (5.x
and 6.x) would convert a library call for the power function into an FP
instruction.  But it would use only 1/8 to 1/4 of the XMM/YMM register
width, would not automatically unroll loops, and would not leverage the
vector nature of the problem.
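
Roughly the shape of loop involved (a reconstruction on my part -- the
real rzf code differs, and square_all is a made-up name): gcc folds
pow(x, 2.0) into a multiply, which is the library-call-to-FP-instruction
conversion above, but that multiply can come out as a scalar op touching
one lane of the register:

    #include <math.h>

    /* Hypothetical example.  At -O3 gcc folds the pow() call into a
       multiply; the complaint is that the result can be a scalar
       mulsd (one lane of XMM/YMM) with no unrolling. */
    void square_all(long n, const double * restrict x,
                    double * restrict y)
    {
        for (long i = 0; i < n; i++)
            y[i] = pow(x[i], 2.0);
    }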

Basically, not much has changed in 20+ years ... you annotate your code
with pragmas and similar, or use instruction primitives and give up on the
optimizer/code generator.
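
Concretely, the two escape hatches look like this for the same
saxpy-style loop (my sketch; build with something like
gcc -O3 -mavx512f -fopenmp-simd):

    #include <immintrin.h>

    /* Route 1: annotate with a pragma and let the compiler pick
       the instructions. */
    void saxpy_pragma(long n, float a, const float *x, float *y)
    {
        #pragma omp simd
        for (long i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

    /* Route 2: instruction primitives -- give up on the code
       generator and write the AVX-512 yourself.  Assumes n is a
       multiple of 16; real code needs a remainder loop. */
    void saxpy_intrin(long n, float a, const float *x, float *y)
    {
        __m512 va = _mm512_set1_ps(a);
        for (long i = 0; i < n; i += 16) {
            __m512 vx = _mm512_loadu_ps(x + i);
            __m512 vy = _mm512_loadu_ps(y + i);
            _mm512_storeu_ps(y + i, _mm512_fmadd_ps(va, vx, vy));
        }
    }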

When it comes down to it, compilers aren't really as smart as many of us
would like.  Converting idiomatic code into efficient assembly isn't what
they are designed for; they are designed to produce correct assembly.
Correct doesn't mean efficient in many cases, and some of the less obvious
optimizations we might expect to be beneficial are not taken.  We can
hand-modify the code and see whether those optimizations pay off, but the
compilers often are not looking at the problem holistically.


> I understand that hand-coding this stuff usually still gives you the best
> performance (see GotoBLAS/OpenBLAS, for example), but does your average HPC
> programmer trying to get decent performance need to hand-code that stuff,
> too?
>

Generally, yes.  Optimizing serial code for GPUs doesn't work well.
Rewriting for GPUs (e.g. taking into account the GPU data/compute flow
architecture) does work well.
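
To illustrate the structural difference (a plain-C sketch of my own, with
the actual CUDA machinery left out; dot_serial and dot_partials are
made-up names): a serial reduction carries a dependence chain from one
iteration to the next, while the GPU-shaped rewrite produces independent
partial results that can map onto thousands of threads:

    /* Serial habit: every iteration waits on the previous sum. */
    double dot_serial(long n, const double *x, const double *y)
    {
        double s = 0.0;
        for (long i = 0; i < n; i++)
            s += x[i] * y[i];
        return s;
    }

    /* GPU-shaped rewrite: each t below stands in for one thread
       computing an independent partial sum; a separate reduction
       combines partial[] afterwards.  This is the data/compute
       flow a real GPU kernel plus tree reduction would use. */
    void dot_partials(long n, long nthreads, const double *x,
                      const double *y, double *partial)
    {
        for (long t = 0; t < nthreads; t++) {
            double s = 0.0;
            for (long i = t; i < n; i += nthreads)
                s += x[i] * y[i];
            partial[t] = s;
        }
    }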


-- 

Joe Landman
e: joe.landman at gmail.com
t: @hpcjoe
w: https://scalability.org
g: https://github.com/joelandman
l: https://www.linkedin.com/in/joelandman

_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf