<div dir="ltr"><div dir="ltr"><div><br></div><div>Hi all,<br></div><div><br></div></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
This is, in my humble opinion, also the big problem CPUs are facing. They are <br>
built to tackle all possible scenarios, from simple integer to floating point, <br>
from in-memory to disk I/O. In some respects it would have been better to stick <br>
with a separate math unit which could then be selected according to the <br>
workload you want to run on that server. I guess this is where the GPUs are <br>
trying to fit in here, or maybe ARM. <br></blockquote><div><br></div><div> I recall a few years ago the rumors that the Argonne "A18" system was going to use the 'Configurable Spatial Accelerators' that Intel was developing, with the idea being you <i>could</i> reconfigure based on the needs of the code. In principle, it sounds like the Holy Grail, but in practice it seems quite difficult, and I don't believe I've heard much more about the CSA approach since. <br></div><div><br></div><div>WikiChip on the CSA: <a href="https://en.wikichip.org/wiki/intel/configurable_spatial_accelerator">https://en.wikichip.org/wiki/intel/configurable_spatial_accelerator</a></div><div>NextPlatform article: <a href="https://www.nextplatform.com/2018/08/30/intels-exascale-dataflow-engine-drops-x86-and-von-neuman/">https://www.nextplatform.com/2018/08/30/intels-exascale-dataflow-engine-drops-x86-and-von-neuman/</a></div><div><br></div><div> I have to imagine that research hasn't gone fully quiet, especially with Intel's moves towards oneAPI and their FPGA experiences, but I haven't seen anything about it in a while. Of course....<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
I also agree with the compiler "problem". If you start to push some <br>
compilers too hard, the code runs very fast but the results are simply <br>
wrong. Again, in an ideal world we would have a compiler matched to the given <br>
hardware which also depends on the job you want to run. <br></blockquote><div><br></div><div> ... It exacerbates the compiler issues, <i>I think</i>. I hesitate to say it does so definitively, since the patent write-up talks about how the CSA architecture uses a representation very similar to what the (now old) Intel compilers created as an IR (intermediate representation). In my opinion, having a compiler that can 'do everything' is like having an AI that can do everything - we're good at very, <i>very</i> specific use-cases, but not generality. So configurable systems are a big challenge. (I'm <i>way</i> out of my depth on compilers, though - maybe they're improving massively?)<br></div><div> <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Maybe the whole climate problem will finally push HPC towards more bespoke <br>
systems where the components are fit for the job in question, say weather <br>
modeling for example, simply because that would be more energy efficient and <br>
faster. <br></blockquote><div><br></div><div> I can't speak to whether climate research will influence hardware, but back to the <i>original</i> theme of this thread, I actually had some data -very <i>limited</i> data, mind you!- on how NCAR's climate model, CESM, run in an 'F2000climo' case (one of many, many cases, and very atmospheric focused) at 2-degree atmosphere resolution (<i>very</i> coarse) on a 36-core Xeon Skylake performs across AVX2, AVX512 and AVX512+FMA. By default, FMA is turned off in these cases due to numerical sensitivity. So, that's a <i>very</i> specific case, but on the off chance people are curious, here's what it looks like - note that this is <i>noisy</i> data, because the model also does a lot of I/O, hence why I tend to look at median times, in blue below:<br></div><div><br></div><div><span style="color:rgb(0,0,0);font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px;text-decoration:none"><table dir="ltr" style="table-layout:fixed;font-size:10pt;font-family:Arial;width:0px;border-collapse:collapse;border:medium none" cellspacing="0" cellpadding="0" border="1"><colgroup><col width="114"><col width="121"><col width="119"><col width="107"></colgroup><tbody><tr style="height:21px"><td rowspan="1" colspan="4" style="border:1px solid rgb(204,204,204);overflow:hidden;padding:2px 3px;vertical-align:bottom;background-color:rgb(234,209,220);text-align:center">SKX (AWS C5N.18xlarge) Performance Comparison</td></tr><tr style="height:21px"><td rowspan="1" colspan="4" style="border:1px solid rgb(204,204,204);overflow:hidden;padding:2px 3px;vertical-align:bottom;background-color:rgb(217,234,211);text-align:center">CESM Case: F2000climo @ f19_g17 resolution<br>(36 cores each component / 10 model day run, skipping 1st and last)</td></tr><tr style="height:21px"><td style="border:1px solid rgb(204,204,204);overflow:hidden;padding:2px 3px;vertical-align:bottom">Flags</td><td style="border:1px solid rgb(204,204,204);overflow:hidden;padding:2px 3px;vertical-align:bottom">AVX2 (no FMA)</td><td style="border:1px solid rgb(204,204,204);overflow:hidden;padding:2px 3px;vertical-align:bottom">AVX512 (no FMA)</td><td style="border:1px solid rgb(204,204,204);overflow:hidden;padding:2px 3px;vertical-align:bottom">AVX512 + FMA</td></tr><tr style="height:21px"><td style="border:1px solid rgb(204,204,204);overflow:hidden;padding:2px 3px;vertical-align:bottom">Min</td><td style="border:1px solid rgb(204,204,204);overflow:hidden;padding:2px 3px;vertical-align:bottom;text-align:right">60.18</td><td style="border:1px solid rgb(204,204,204);overflow:hidden;padding:2px 3px;vertical-align:bottom;text-align:right">60.24</td><td style="border:1px solid rgb(204,204,204);overflow:hidden;padding:2px 3px;vertical-align:bottom;text-align:right">59.16</td></tr><tr style="height:21px"><td style="border:1px solid rgb(204,204,204);overflow:hidden;padding:2px 3px;vertical-align:bottom">Max</td><td style="border:1px solid rgb(204,204,204);overflow:hidden;padding:2px 3px;vertical-align:bottom;text-align:right">66.26</td><td style="border:1px solid rgb(204,204,204);overflow:hidden;padding:2px 3px;vertical-align:bottom;text-align:right">60.47</td><td style="border:1px solid rgb(204,204,204);overflow:hidden;padding:2px 3px;vertical-align:bottom;text-align:right">59.40</td></tr><tr style="height:21px"><td style="border:1px solid rgb(204,204,204);overflow:hidden;padding:2px 
3px;vertical-align:bottom;background-color:rgb(207,226,243)">Median</td><td style="border:1px solid rgb(204,204,204);overflow:hidden;padding:2px 3px;vertical-align:bottom;background-color:rgb(207,226,243);color:rgb(0,0,0);text-align:right">60.28</td><td style="border:1px solid rgb(204,204,204);overflow:hidden;padding:2px 3px;vertical-align:bottom;background-color:rgb(207,226,243);color:rgb(0,0,0);text-align:right">60.38</td><td style="border:1px solid rgb(204,204,204);overflow:hidden;padding:2px 3px;vertical-align:bottom;background-color:rgb(207,226,243);color:rgb(0,0,0);text-align:right">59.32</td></tr></tbody></table></span></div><div><br></div><div> The take-away? We're not really benefiting <i>at all</i> (at this resolution, for this compset, etc) from AVX512 here. Maybe at higher resolution? Maybe with more vertical levels, or chemistry, or something like that? <i>Maybe</i>, but differences seem indistinguishable from noise here, and possibly negative! Now, give us more <i>memory bandwidth</i>, and that's fantastic. Could this code be rewritten to take better advantage of larger vectors? Sure, and some <i>really</i> capable people do work on that sort of stuff, and it helps, but as an <i>evolution</i> in performance, not a revolution in it.</div><div><br></div><div> (Also, I'm always horrified by presenting one-off tests as examples of anything, but it's the only data I have on-hand! Other cases may indeed vary.)<br></div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
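</blockquote><div><br></div><div> (A quick aside on the FMA sensitivity, since it surprises people: a fused multiply-add keeps the intermediate product at full precision before the add, so it rounds differently than a separate multiply-then-add, and bit-for-bit comparisons against a non-FMA baseline can break even though neither answer is 'wrong'. Here's a tiny, contrived C sketch - emphatically <i>not</i> CESM code, just hand-picked values - that shows the effect; you may need something like -ffp-contract=off so the compiler doesn't quietly fuse the 'separate' line as well:<br></div><div><br></div><div><pre style="font-family:monospace">/* Contrived illustration (NOT CESM code): why enabling FMA can change results.
 * With FMA the product a*b is kept at full precision inside the fused op,
 * so the final rounding can differ from the separately rounded version.
 * Build with e.g.:  cc -O2 -ffp-contract=off fma_demo.c -lm               */
#include &lt;math.h&gt;
#include &lt;stdio.h&gt;

int main(void) {
    double a = 1.0 + 0x1p-27;       /* 1 + 2^-27, exactly representable        */
    double b = 1.0 - 0x1p-27;       /* 1 - 2^-27, so a*b = 1 - 2^-54 exactly   */
    double c = -1.0;

    double separate = a * b + c;    /* a*b rounds to 1.0 first  -&gt; 0.0         */
    double fused    = fma(a, b, c); /* exact a*b enters the add -&gt; -2^-54      */

    printf("separate: %.17g\n", separate);
    printf("fused   : %.17g\n", fused);
    return 0;
}
</pre></div><div><br></div><div> For the columns in the table above, only the vector/FMA flags changed between builds; if you're driving the Intel compiler, the knobs are roughly -xCORE-AVX2 vs -xCORE-AVX512 plus -no-fma or -fma, though I'm paraphrasing from memory rather than quoting the actual build files.)<br></div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">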
Before somebody comes along with: but but but it costs! Think about how much <br>
money is being spent simply to kill people, or on other wasteful projects like <br>
Brexit etc. <br></blockquote><div><br></div><div> One can only hope. When it comes to spending on research, I recall the quote:<br></div><div> "If you think education is expensive, try ignorance!"<br></div><div> </div><div> Cheers,</div><div> - Brian</div><div><br></div><br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
On Monday, 21 June 2021 at 14:46:30 BST, Joe Landman wrote:<br>
> On 6/21/21 9:20 AM, Jonathan Engwall wrote:<br>
> > I have followed this thinking "square peg, round hole."<br>
> > You have got it again, Joe. Compilers are your problem.<br>
> <br>
> Erp ... did I mess up again?<br>
> <br>
> System architecture has been a problem ... making a processing unit<br>
> 10-100x as fast as its support components means you have to code with<br>
> that in mind. A simple `gfortran -O3 mycode.f` won't necessarily<br>
&gt; generate optimal code for the system (but I swear ... -O3 ... it says<br>
> it on the package!)<br>
> <br>
> Way back at Scalable, our secret sauce was largely increasing IO<br>
> bandwidth and lowering IO latency while coupling computing more tightly<br>
> to this massive IO/network pipe set, combined with intelligence in the<br>
> kernel on how to better use the resources. It was simply a better<br>
> architecture. We used the same CPUs. We simply exploited the design<br>
> better.<br>
> <br>
> End result was codes that ran on our systems with off-cpu work (storage,<br>
> networking, etc.) could push our systems far harder than competitors. <br>
> And you didn't have to use a different ISA to get these benefits. No<br>
> recompilation needed, though we did show the folks who were interested,<br>
> how to get even better performance.<br>
> <br>
> Architecture matters, as does implementation of that architecture. <br>
> There are costs to every decision within an architecture. For AVX512,<br>
> along comes lots of other baggage associated with downclocking, etc. <br>
> You have to do a cost-benefit analysis on whether or not it is worth<br>
> paying for that baggage, with the benefits you get from doing so. Some<br>
> folks have made that decision towards AVX512, and have been enjoying the<br>
&gt; benefits of doing so (i.e., willing to pay the costs). For the general<br>
> audience, these costs represent a (significant) hurdle one must overcome.<br>
> <br>
> Here's where awesome compiler support would help. FWIW, gcc isn't that<br>
&gt; great a compiler. It's not performance-minded for HPC. It's a reasonable<br>
> general purpose standards compliant (for some subset of standards)<br>
> compilation system. LLVM is IMO a better compiler system, and its<br>
> clang/flang are developing nicely, albeit still not really HPC focused. <br>
> Then you have variants built on that. Like the Cray compiler, Nvidia<br>
> compiler and AMD compiler. These are HPC focused, and actually do quite<br>
> well with some codes (though the AMD version lags the Cray and Nvidia<br>
> compilers). You've got the Intel compiler, which would be a good general<br>
> compiler if it wasn't more of a marketing vehicle for Intel processors<br>
> and their features (hey you got an AMD chip? you will take the slowest<br>
> code path even if you support the features needed for the high<br>
> performance code path).<br>
> <br>
> Maybe, someday, we'll get a great HPC compiler for C/Fortran.<br>
<br>
<br>
<br>
_______________________________________________<br>
Beowulf mailing list, <a href="mailto:Beowulf@beowulf.org" target="_blank">Beowulf@beowulf.org</a> sponsored by Penguin Computing<br>
To change your subscription (digest mode or unsubscribe) visit <a href="https://beowulf.org/cgi-bin/mailman/listinfo/beowulf" rel="noreferrer" target="_blank">https://beowulf.org/cgi-bin/mailman/listinfo/beowulf</a><br>
</blockquote></div></div>