<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class="">Well said. Expanding on this, caches work because of both temporal locality and<div class="">spatial locality. Spatial locality is addressed by making cache lines substantially</div><div class="">larger than a byte or word; these days, 64 bytes is pretty common. Some prefetch schemes, </div><div class="">like the L1D scheme that fetches the line at VA ^ 64, clearly exploit spatial locality. Streaming </div><div class="">prefetch has an expanded notion of “spatial”, I suppose!</div><div class=""><br class=""></div><div class="">What puzzles me is why compilers seem not to have evolved much notion of cache management. It </div><div class="">seems like something a smart compiler could do. Instead, it is left to Prof. Goto and the folks</div><div class="">behind ATLAS and BLIS to figure out how to rewrite algorithms for efficient cache behavior. To my</div><div class="">limited knowledge, compilers don’t make much use of PREFETCH or of non-temporal loads and stores</div><div class="">either. 
It seems to me that once the programmer helps with RESTRICT and so forth, then compilers could perfectly well dynamically move parts of arrays around to maximize cache use.</div><div class=""><br class=""></div><div class="">-L<br class=""><div><br class=""><blockquote type="cite" class=""><div class="">On 2021, Sep 20, at 6:35 AM, Jim Cownie <<a href="mailto:jcownie@gmail.com" class="">jcownie@gmail.com</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><meta http-equiv="Content-Type" content="text/html; charset=utf-8" class=""><div style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class=""><blockquote type="cite" class=""><div dir="auto" class="">Eadline's Law : Cache is only good the second time.</div></blockquote><div class=""><br class=""></div>Hmm, that’s why they have all those clever pre-fetchers which try to guess your memory access patterns and predict what's going to be needed next.<div class="">(Your choice whether you read “clever” in a cynical voice or not :-))<br class=""><div class="">*IF* that works, then the cache is useful the first time.</div><div class="">If not, then they can mess things up royally by evicting stuff that you did want there.<br class=""><div class=""><br class=""><blockquote type="cite" class=""><div class="">On 19 Sep 2021, at 12:02, John Hearns <<a href="mailto:hearnsj@gmail.com" class="">hearnsj@gmail.com</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><div dir="auto" class="">Eadline's Law : Cache is only good the second time.</div><br class=""><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, 17 Sep 2021, 21:25 Douglas Eadline, <<a href="mailto:deadline@eadline.org" class="">deadline@eadline.org</a>> wrote:<br class=""></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">--snip--<br class="">
><br class="">
> Where I disagree with you is (3). Whether or not cache size is important<br class="">
> depends on the size of the job. If you're iterating through data-parallel<br class="">
> loops over a large dataset that exceeds cache size, the opportunity to<br class="">
> reread cached data is probably limited or nonexistent. As we often say<br class="">
> here, "it depends". I'm sure someone with better low-level hardware<br class="">
> knowledge will pipe in and tell me why I'm wrong (Cunningham's Law).<br class="">
><br class="">
<br class="">
Of course it all depends. However, as core counts go up, a<br class="">
fixed amount of cache must get shared. Since the high core counts<br class="">
are putting pressure on main memory BW, cache gets more<br class="">
important. This is why AMD is doing V-cache for new processors.<br class="">
Core counts have outstripped memory BW; their solution<br class="">
seems to be big caches. And, cache is only good the second time :-)<br class="">
<br class="">
<br class="">
-- big snip--<br class="">
<br class="">
-- <br class="">
Doug<br class="">
<br class="">
_______________________________________________<br class="">
Beowulf mailing list, <a href="mailto:Beowulf@beowulf.org" target="_blank" rel="noreferrer" class="">Beowulf@beowulf.org</a> sponsored by Penguin Computing<br class="">
To change your subscription (digest mode or unsubscribe) visit <a href="https://beowulf.org/cgi-bin/mailman/listinfo/beowulf" rel="noreferrer noreferrer" target="_blank" class="">https://beowulf.org/cgi-bin/mailman/listinfo/beowulf</a><br class="">
</blockquote></div>
</div></blockquote></div><br class=""><div class="">
<div style="letter-spacing: normal; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class=""><div class="">-- Jim<br class="">James Cownie <<a href="mailto:jcownie@gmail.com" class="">jcownie@gmail.com</a>><br class="">Mob: +44 780 637 7146<br class=""><br class=""><br class=""><br class=""></div></div>
</div>
<br class=""></div></div></div></div></blockquote></div><br class=""></div></body></html>