[Beowulf] AMD performance (was 500GB systems)
Stu Midgley
sdm900 at gmail.com
Mon Jan 14 00:20:59 PST 2013
As you might guess, we were very happy with how our codes run on the Phi
and the time/effort required to port. It is very very simple to use and
the performance is excellent :) With no tuning (just a recompile) we saw a
single Phi go about 1.7x faster than our current 64-core AMD nodes.
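For context, "just a recompile" really is close to the whole story for
running natively on the card.  A minimal sketch of the sort of thing
involved, assuming Intel's compiler and an OpenMP code (the kernel and
compile line below are illustrative only, not our actual code):

/* Illustrative only.  With icc the usual native-mode recipe is roughly:
 *
 *   icc -openmp -O3 -mmic saxpy.c -o saxpy.mic    # build for the Phi
 *   scp saxpy.mic mic0: && ssh mic0 ./saxpy.mic   # run on the card
 *
 * Dropping -mmic gives the same source built for the host.
 */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void)
{
    size_t n = 1 << 24;
    size_t i;
    float *x = malloc(n * sizeof *x);
    float *y = malloc(n * sizeof *y);

    for (i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* OpenMP spreads the loop across all hardware threads on the card. */
    #pragma omp parallel for
    for (i = 0; i < n; i++)
        y[i] += 2.0f * x[i];

    printf("y[0] = %f, threads = %d\n", y[0], omp_get_max_threads());
    free(x);
    free(y);
    return 0;
}
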
On Sun, Jan 13, 2013 at 10:21 AM, Bill Broadley <bill at cse.ucdavis.edu> wrote:
> On 01/12/2013 04:25 PM, Stu Midgley wrote:
> > Until the Phis came along, we were purchasing 1RU, 4-socket nodes
> > with 6276's and 256GB RAM. On all our codes, we found the throughput
> > to be greater than any equivalent-density Sandy Bridge systems
> > (usually 2 x dual-socket in 1RU) at about 10-15% less energy and
> > about 1/3 the price for the actual CPUs (saving a couple thousand $$
> > per 1RU).
>
> For many workloads we found much the same. The last few generations of AMD
> CPUs have had 4 memory channels per socket. At first I was puzzled that
> even fairly memory-intensive codes scaled well.
>
> Even when following a random pointer chain, performance almost doubled when
> I tested with 2 threads per memory channel instead of 1.
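>
> The kind of test I mean is roughly the sketch below (illustrative, not
> the exact benchmark; the names and sizes are made up).  Build a random
> cyclic permutation, then chase it so every load depends on the one
> before it, with one independent chain per thread:
>
> /* Pointer-chase sketch: one dependent-load chain per OpenMP thread.
>  * Build with something like: icc -openmp -O2 chase.c (or gcc -fopenmp).
>  */
> #include <stdio.h>
> #include <stdlib.h>
> #include <omp.h>
>
> #define N (16 * 1024 * 1024)        /* 128MB of indices, well past L3 */
> #define STEPS (100 * 1000 * 1000L)
>
> static size_t *make_cycle(void)     /* Sattolo's algorithm: one big cycle */
> {
>     size_t *next = malloc(N * sizeof *next);
>     size_t i;
>     for (i = 0; i < N; i++)
>         next[i] = i;
>     for (i = N - 1; i > 0; i--) {
>         size_t j = (size_t)(drand48() * i);   /* j in [0, i) */
>         size_t t = next[i]; next[i] = next[j]; next[j] = t;
>     }
>     return next;
> }
>
> int main(void)
> {
>     #pragma omp parallel            /* one chain per thread */
>     {
>         size_t *next = make_cycle();
>         size_t p = 0;
>         long s;
>         double t0, ns;
>
>         t0 = omp_get_wtime();
>         for (s = 0; s < STEPS; s++)
>             p = next[p];            /* each load waits on the previous one */
>         ns = (omp_get_wtime() - t0) * 1e9 / STEPS;
>
>         printf("thread %d: %.1f ns per load (p=%zu)\n",
>                omp_get_thread_num(), ns, p);
>         free(next);
>     }
>     return 0;
> }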
>
> Then I realized the L3 latency is almost half the latency to main
> memory. So you get a significant throughput advantage by having a queue
> of L3 cache misses waiting for the instant any of the memory channels
> frees up.
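>
> (Rough illustrative arithmetic, with assumed numbers rather than measured
> ones: if a miss costs ~100 ns from DRAM, while one DDR3-1600 channel can
> move a 64-byte line roughly every 5 ns, then a single channel needs on
> the order of 100/5 = 20 lines in flight to stay busy -- and one
> pointer-chasing thread only ever has 1 outstanding. So piling several
> independent miss streams onto each channel keeps helping until the
> channel itself saturates.)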
>
> In fact, even with 2 jobs per memory channel the channel sometimes goes
> idle, and even 4 jobs per memory channel sees some increase.
> The good news is that most codes aren't as memory bandwidth/latency
> intensive as the related microbenchmarks (and therefore scale better).
>
> I think having more cores per memory channel is a key part of AMD's
> improved throughput per socket compared to Intel. Not always true, of
> course; again, it's highly application-dependent.
>
> > Of course, we are now purchasing Phis. The first 2 racks are meant
> > to turn up this week.
>
> Interesting, please report back on anything of interest that you find.
>
--
Dr Stuart Midgley
sdm900 at sdm900.com