<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<p>Scott (and Michael and Carlos), <br>
</p>
<p>Thanks for your excellent feedback. That's exactly the kind of
enlightening insight I was looking for. Interesting that the HBM
on Fugaku exceeds the needs of the processor. <br>
</p>
<pre class="moz-signature" cols="72">Prentice
</pre>
<div class="moz-cite-prefix">On 6/16/21 2:23 PM, Scott Atchley wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CAL8g0jL8R+HEB9MZnM-0Bh6UzyhUQs7VfJvQbHbnpO6yuXj7tg@mail.gmail.com">
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">On Wed, Jun 16, 2021 at 1:15 PM Prentice Bisbal
via Beowulf <<a href="mailto:beowulf@beowulf.org"
moz-do-not-send="true">beowulf@beowulf.org</a>> wrote:<br>
</div>
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">Did
anyone else attend this webinar panel discussion with AMD
hosted by <br>
HPCWire yesterday? It was titled "AMD HPC Solutions:
Enabling Your <br>
Success in HPC"<br>
<br>
<a
href="https://www.hpcwire.com/amd-hpc-solutions-enabling-your-success-in-hpc/"
rel="noreferrer" target="_blank" moz-do-not-send="true">https://www.hpcwire.com/amd-hpc-solutions-enabling-your-success-in-hpc/</a><br>
<br>
I attended it, and noticed there was no mention of AMD
supporting AVX512, so during the question-and-answer
portion of the program I asked when AMD processors will
support AVX512. The answer given, and I'm not making this
up, was that AMD listens to its users and gives them what
they want, and right now they're not hearing any demand
for AVX512.<br>
<br>
Personally, I call BS on that one. I can't imagine anyone
in the HPC community saying "we'd like processors that
offer only 1/2 the floating-point performance of Intel
processors". Sure, AMD can offer more cores, but with only
AVX2 you'd need twice as many cores as an Intel processor,
all other things being equal.<br>
<br>
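The factor of two follows directly from the peak-FLOPS formula. A quick sketch (the core count and clock below are generic round numbers for illustration, not the specs of any particular SKU):

```python
# Peak DP FLOP/s = cores * clock * SIMD DP lanes * FMA units * 2
# (each FMA counts as 2 FLOPs). Holding everything else equal,
# halving the vector width (AVX-512 -> AVX2) halves peak FLOPS,
# so matching it would take twice the cores. Core count and
# clock here are generic assumptions, not real SKU specs.

def peak_gflops(cores, ghz, simd_dp_lanes, fma_units=2):
    return cores * ghz * simd_dp_lanes * fma_units * 2

avx512 = peak_gflops(cores=32, ghz=2.0, simd_dp_lanes=8)  # 512b / 64b
avx2   = peak_gflops(cores=32, ghz=2.0, simd_dp_lanes=4)  # 256b / 64b
print(avx512 / avx2)  # -> 2.0
```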
Last fall I evaluated potential new cluster nodes for a
large cluster purchase using the HPL benchmark. I compared
a server with dual AMD EPYC 7H12 processors (128 cores) to
a server with quad Intel Xeon 8268 processors (96 cores).
I measured 5,389 GFLOPS for the Xeon 8268 system and only
3,446 GFLOPS for the AMD 7H12 system. That's a LINPACK
score only 64% of the Xeon 8268 system's, despite the AMD
system having 33% more cores.<br>
<br>
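A quick sanity check of those numbers (the GFLOPS figures are the measured values quoted above; the per-core figures are derived arithmetic):

```python
# Back-of-the-envelope check of the HPL comparison above.
xeon_gflops, xeon_cores = 5389, 96    # quad Intel Xeon 8268
epyc_gflops, epyc_cores = 3446, 128   # dual AMD EPYC 7H12

ratio = epyc_gflops / xeon_gflops          # ~0.64 -> "only 64%"
extra_cores = epyc_cores / xeon_cores - 1  # ~0.33 -> "33% more cores"

print(f"EPYC/Xeon HPL ratio: {ratio:.0%}")
print(f"Extra cores on EPYC: {extra_cores:.0%}")
print(f"GFLOPS per core: Xeon {xeon_gflops/xeon_cores:.1f}, "
      f"EPYC {epyc_gflops/epyc_cores:.1f}")
```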
From what I've heard, the AMD processors run much hotter
than the Intel <br>
processors, too, so I imagine a FLOPS/Watt comparison
would be even less <br>
favorable to AMD.<br>
<br>
An argument can be made that calculations that lend
themselves to vectorization should be done on GPUs instead
of the main processors, but the last time I checked, GPU
jobs are still memory-limited, and moving data in and out
of GPU memory can still take time, so I can see situations
where, for large amounts of data, using CPUs would be
preferred over GPUs.<br>
<br>
Your thoughts?<br>
<br>
-- <br>
Prentice<br>
</blockquote>
<div><br>
</div>
<div>AMD has studied this quite a bit in DOE's FastForward-2
and PathForward. I think Carlos' comment is on track.
Having a unit that cannot be fed data quickly enough is
pointless. It is application dependent: if your working
set fits in cache, the vector units work well; if not,
you have to move data, which stalls the compute pipelines.
NERSC saw only a 10% increase in performance when moving
from low-core-count Xeon CPUs with AVX2 to Knights Landing
with many cores and AVX-512, when it should have seen an
order-of-magnitude increase. Although Knights Landing had
MCDRAM (Micron's not-quite HBM), other constraints limited
performance (e.g., lack of enough memory references in
flight, coherence traffic).</div>
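<div>The cache-versus-bandwidth argument is essentially the roofline model. A minimal sketch (the peak-FLOPS and bandwidth figures are rough, illustrative values in the Knights Landing ballpark, not vendor specs):</div>

```python
# Roofline-style estimate: a kernel is memory-bound when its
# arithmetic intensity (FLOPs per byte moved) falls below the
# machine balance (peak FLOP/s divided by memory bandwidth).
# Hardware numbers below are rough, illustrative values only.

def attainable_gflops(peak_gflops, bw_gbs, intensity):
    """Roofline: min(compute roof, bandwidth * intensity)."""
    return min(peak_gflops, bw_gbs * intensity)

# Illustrative KNL-like node: ~3000 GFLOP/s DP peak, ~450 GB/s
# from MCDRAM -> machine balance ~6.7 FLOP/byte.
peak, bw = 3000.0, 450.0
print(f"machine balance: {peak / bw:.1f} FLOP/byte")

# A STREAM-triad-like kernel (a[i] = b[i] + s*c[i]) does 2 FLOPs
# per 24 bytes moved -> ~0.083 FLOP/byte, far below the balance,
# so the wide vector units sit idle waiting on memory.
for ai in (0.083, 1.0, 10.0):
    g = attainable_gflops(peak, bw, ai)
    bound = "memory-bound" if g < peak else "compute-bound"
    print(f"AI {ai:6.3f}: {g:7.1f} GFLOP/s ({bound})")
```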
<div><br>
</div>
<div>Fujitsu's ARM64 chip with 512b SVE in Fugaku does much
better than Xeon with AVX-512 (or Knights Landing) because
of the attached High Bandwidth Memory (HBM) and, I assume,
a larger number of memory references in flight. The
downside is the lack of memory capacity (only 32 GB per
node). This shows that it is possible to get more
performance from a CPU with a 512b vector engine. That
said, it is not clear that even this CPU design can
extract the most from the memory bandwidth. Given the
increase in memory bandwidth from Summit to Fugaku, one
would expect performance on real apps to increase by the
same factor. From the presentations I have seen, that is
not always the case. For some apps, the GPU architecture,
with its coherence on demand rather than with every
operation, can extract more performance.</div>
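<div>Why real apps rarely scale with raw bandwidth can be expressed with Amdahl's law: only the bandwidth-bound fraction of the runtime speeds up. The fractions and the bandwidth ratio below are illustrative assumptions, not measurements of Summit or Fugaku:</div>

```python
# If only a fraction f of an app's runtime is bandwidth-bound,
# an r-times increase in memory bandwidth gives an overall
# speedup of 1 / ((1 - f) + f / r) (Amdahl's law). The f and r
# values below are illustrative, not measured on any system.

def speedup(f, r):
    """Amdahl: fraction f sped up by factor r, rest unchanged."""
    return 1.0 / ((1.0 - f) + f / r)

r = 3.0  # hypothetical 3x jump in per-node memory bandwidth
for f in (1.0, 0.9, 0.5):
    print(f"bandwidth-bound fraction {f:.0%}: "
          f"speedup {speedup(f, r):.2f}x")
```

Even at 90% bandwidth-bound, the app captures only 2.5x of a 3x bandwidth jump; the shortfall points at the other limits mentioned above (references in flight, coherence traffic, capacity).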
<div><br>
</div>
<div>AMD will add 512b vectors if/when it makes sense on
real apps. </div>
</div>
</div>
</div>
</blockquote>
</body>
</html>