<div>Joshua,</div>
<div>Great, thanks. That was clear, and the takeaway is that I should pay attention to the number of memory channels per core (which may be less than 1.0) in addition to the number of cores and the RAM per core. </div>
<div> </div>
<div>What is the "ncpu" column in Table 1 (for example)? Does the 4 refer to 4 cores, and do the 1 and 2 cases not use all the cores on the motherboard? Or is "ncpu" an application parameter? I read it as "number of CPUs". I noticed that the heart simulation didn't have an ncpu column, but that was why I thought you had multiple nodes going.
</div>
<div> </div>
<div>Thanks very much, </div>
<div>Peter</div>
<div> </div>
<div>P.S. And then where does the billiard cue go?<br><br> </div>
<div><span class="gmail_quote">On 3/8/07, <b class="gmail_sendername">Joshua Baker-LePain</b> <<a href="mailto:jlb17@duke.edu">jlb17@duke.edu</a>> wrote:</span>
<blockquote class="gmail_quote" style="PADDING-LEFT: 1ex; MARGIN: 0px 0px 0px 0.8ex; BORDER-LEFT: #ccc 1px solid">On Thu, 8 Mar 2007 at 11:33am, Peter St. John wrote<br><br>> Those benchmarks are quite interesting and I wonder if I interpret them at
<br>> all correctly.<br>> It would seem that the Intel outperforms its advantage in clockspeed (1/6th<br>> faster, but ballpark 1/3 better performance?) so the question would be<br>> performance gain per dollar cost (which is fine); however, for that heart
<br>> simulation towards the end, it looks like the AMD scales up with increasing<br>> nodecount enormously better, and with several nodes actually outperforms the<br>> faster Intel.<br>> Should I guess at relatively poor performance of the networking on the
<br>> motherboard used with the Intel chip or does that have anything to do with<br>> the CPU itself?<br><br>Each benchmark was run on a single system with 4 CPUs (or, rather, 4 cores<br>in 2 sockets) -- there was no network involved. The difference (IMO) lies
<br>in the memory subsystems of the 2 architectures.<br><br>Opterons have 1 memory controller per socket (on the CPU, shared by the 2<br>cores) attached to a dedicated bank of memory via a Hypertransport link<br>(referred to from here on as HT). That socket is connected to the other
<br>CPU socket (and its HT connected memory bank) by HT.<br><br>Xeons (still) have a single memory controller hub with which the CPUs<br>communicate via the front side bus (FSB). That single hub has 2 channels<br>to memory.
<br><br>So, yes, clock-for-clock (and for my usage) Xeon 51xxs are faster than<br>Opterons. But, if your code hits memory *really hard* (which that heart<br>model does), then the multiple paths to memory available to the Opterons
<br>allow them to scale better.<br><br>This situation has existed for a long time on the Intel side. For P4<br>based Xeons it was crippling. The new Core based Xeons, however, don't<br>suffer nearly as badly (due to their big cache, maybe?).
E.g. the thermal<br>simulations in that same file are pretty memory intensive themselves, and<br>P4 based Xeons scaled *horribly* on them. But the 51xx Xeons still scale<br>very well on them (which surprised me).<br><br>
--<br>Joshua Baker-LePain<br>Department of Biomedical Engineering<br>Duke University<br></blockquote></div><br>