Hey Stuart,

Mmm ... interesting.

As I understand it the name K10 corresponds to the GK104, which is
really a graphics-oriented chip. It is the K20, or GK110, that is the
HPC (GPGPU) version of Kepler and the right one to make the comparison
to.

Here is the white paper:

http://www.nvidia.com/content/PDF/kepler/NVIDIA-Kepler-GK110-Architecture-Whitepaper.pdf

One wonders if you are running in single or double precision (maybe you
told me), because the GK110 has 192 single precision cores per SMX unit
but only 64 double precision cores (a 3 to 1 ratio rather than the
typical 2 to 1). It would be interesting to see data from this
comparison. Doing some math:

Single precision:

GK110  15 SMX units x 192 SP cores == 2880 SP ops/clock x 0.738 GHz == 2125 SP GFLOPS
Phi    60 Phi cores x   16 SP lanes ==  960 SP ops/clock x 1.100 GHz == 1056 SP GFLOPS

Double precision:

GK110  15 SMX units x  64 DP cores ==  960 DP ops/clock x 0.738 GHz ==  708 DP GFLOPS
Phi    60 Phi cores x    8 DP lanes ==  480 DP ops/clock x 1.100 GHz ==  528 DP GFLOPS

These peak numbers (assuming I got the math right) of course do not
dictate real code performance outcomes, where effective memory
bandwidth will make a large contribution. Still, it looks like the
GK110 should have the performance edge (if not the productivity edge).
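For what it is worth, here is that same back-of-the-envelope arithmetic as a
tiny C program (same simple model as above: units x lanes x clock, counting
one op per lane per clock; note that the official vendor peak figures are
roughly double these because a fused multiply-add is counted as two flops):

#include <stdio.h>

/* Peak GFLOPS under the simple model used above:
   (execution units) x (SP or DP lanes per unit) x (clock in GHz). */
static double peak_gflops(int units, int lanes_per_unit, double ghz)
{
    return units * lanes_per_unit * ghz;
}

int main(void)
{
    printf("GK110 SP: %.0f GFLOPS\n", peak_gflops(15, 192, 0.738)); /* ~2125 */
    printf("Phi   SP: %.0f GFLOPS\n", peak_gflops(60,  16, 1.100)); /* ~1056 */
    printf("GK110 DP: %.0f GFLOPS\n", peak_gflops(15,  64, 0.738)); /* ~708  */
    printf("Phi   DP: %.0f GFLOPS\n", peak_gflops(60,   8, 1.100)); /* ~528  */
    return 0;
}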

rbw


On Fri, Feb 22, 2013 at 7:44 AM, Dr Stuart Midgley <sdm900@gmail.com> wrote:

We have a code written on both the Phi and K10's and they give about the same performance (both highly optimised finite difference codes).


--
Dr Stuart Midgley
sdm900@sdm900.com



On 15/02/2013, at 4:53 AM, Richard Walsh <rbwcnslt@gmail.com> wrote:

>
> Hey Stuart,
>
> Thanks much for the detail.
>
> So, if I am reading you correctly your test was on a single
> physical PHI (you will later expand to multiple PHIs). This
> was a highly parallel single precision application which showed
> the expected linear speed up to 60 cores ... then a kink as you
> cross into hyper-threaded operation, where the slope is half as
> steep, up to a factor of two (120 core-equivalents) at a 4 to 1
> oversubscription of hyper-threads. This was all done with the Intel
> compilers on an unmodified pthreaded code that is well vectorized.
>
> A good result ... on an application that is a perfect candidate
> for PHI. To run elsewhere with CUDA, OpenMP, or OpenACC
> directives would require quite a bit of recoding, which you were
> happy to avoid. My guess is that if you had a CUDA implementation
> you would see better performance on a FERMI or KEPLER,
> but that is a programming path you do not wish to take.
>
> This is an interesting case to hear about. The flack (technical
> marketing) from NVIDIA focuses on the difficulty of using
> the 'offload' model and the Intel extensions to OpenMP, Cilk, etc.,
> articulates their hardware's performance advantages, and talks
> about OpenACC. These arguments are not unreasonable, but
> clearly not universally decisive.
>
> Thanks much ... and good luck getting all your other codes
> to scale just as well.
>
> rbw
>
> On Thu, Feb 14, 2013 at 10:18 AM, Dr Stuart Midgley <sdm900@gmail.com> wrote:
> Evening
>
> Sorry for the slow response.
>
> Most of our codes are pthreads; we have avoided MPI and OpenMP as much as possible. Our current cluster consists of Nehalem, Westmere, Sandy Bridge and Interlagos of various flavours. Our Phi cards are in Sandy Bridge systems (the host machine has 16 cores with 128GB ram). We run the Intel compilers.
>
> Our fastest systems are the 64-core Interlagos systems (256GB ram) running at 2.6GHz. For a few of our most important kernels, a single phi had greater throughput than a whole node, which, if you count the flops, is expected. The Phi's have a massive amount of single precision floating point performance (our codes are single precision).
>
> Our kernels vectorise very well (lots of hand coded SSE3) and are expected to run very well on the phi (we haven't tested these codes yet). The codes we have tested are trivially parallel and very FP heavy - they ported easily to the phi and run very well.
>
> The codes I tested (in like 2hrs) saw linear speedup to 60 cores, then a "kink" in performance, and then continued performance gains right up to 240 threads. Essentially these codes are single cpu with a trivial wrapper around them to hand out work. This is exactly what hyper threading was designed to help. So at 240 threads, we were about 120 times faster than a single thread of this code. At 60 threads, we were 60 times faster :)
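
A minimal sketch of that kind of work-handout wrapper - worker threads pulling
items off a shared counter, oversubscribed onto the Phi's hardware threads -
might look like the following. This is only an illustration of the idea (the
kernel, item count and thread count are made up), not the actual code:

/* Trivial pthreads work-handout wrapper: each worker grabs the next
   item index from an atomic counter and runs the kernel on it. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NUM_ITEMS   100000
#define NUM_THREADS 240            /* e.g. 60 Phi cores x 4 hardware threads */

static atomic_int next_item = 0;

static void process_item(int i)    /* the FP-heavy single-cpu kernel goes here */
{
    (void)i;
}

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        int i = atomic_fetch_add(&next_item, 1);   /* hand out the next item */
        if (i >= NUM_ITEMS)
            break;
        process_item(i);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[NUM_THREADS];
    for (int t = 0; t < NUM_THREADS; t++)
        pthread_create(&tid[t], NULL, worker, NULL);
    for (int t = 0; t < NUM_THREADS; t++)
        pthread_join(tid[t], NULL);
    printf("processed %d items on %d threads\n", NUM_ITEMS, NUM_THREADS);
    return 0;
}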
>
> Again, since the codes I tested were small data in, small data out, heavy compute and trivially parallel, running over multiple phi's is trivial and provides linear performance gains. As we start porting more of our complex codes, I expect to see similar gains. Our codes already run very very well on 64 cores…
>
> The phi's are separate cards, in separate PCIe slots. I have not delved into the programming APIs fully, but I suspect you can utilise one phi card for your threaded codes. The way I've been running is with a native phi application (basically using the Phi as a separate linux cluster node)… using it in offload mode is very different and you may well be able to get your kernel running across both with the right pragmas.
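
To illustrate the two usage models being talked about here (a sketch only,
from memory of the Intel compiler documentation, not tested code): in native
mode the whole program is built with "icc -mmic" and run directly on the
card, while in offload mode a hot region is marked with the Intel offload
pragma so it runs on the coprocessor and the rest stays on the host:

#include <stdio.h>

int main(void)
{
    enum { N = 4096 };
    float a[N], b[N];

    for (int i = 0; i < N; i++)
        a[i] = (float)i;

    /* Intel offload model: copy a[] to the first coprocessor (mic:0),
       run the loop there, copy b[] back.  With a compiler that does not
       know the pragma, the loop simply runs on the host instead. */
    #pragma offload target(mic:0) in(a) out(b)
    for (int i = 0; i < N; i++)
        b[i] = a[i] * a[i];

    printf("b[10] = %f\n", b[10]);
    return 0;
}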
>
> To be 100% honest, we took the boots and all approach. If we had only purchased 1 phi to test on, we would never have expended the energy to port all our codes. Purchasing hundreds of them gives you a lot of impetus to port your codes quickly :)
>
>
> --
> Dr Stuart Midgley
> sdm900@sdm900.com
>
>
>
> On 13/02/2013, at 12:38 AM, Richard Walsh <rbwcnslt@gmail.com> wrote:
>
> >
> > Hey Stuart,
> >
> > Thanks for your answer ...
> >
> > That sounds compelling. May I ask a few more questions?
> >
> > So should I assume that this was a threaded SMP type application
> > (OpenMP, pthreads), or is it MPI based? Is the supporting CPU of the
> > multi-core Sandy Bridge vintage? Have you been able to compare
> > the hyper-threaded, multi-core scaling on that Sandy Bridge side of the
> > system with that on the Phi (fewer cores to compare, of course)? Using the
> > Intel compilers I assume ... how well do your kernels vectorize? Curious
> > about the observed benefits of hyper-threading, which generally offers
> > little to floating-point intensive HPC computations where functional unit
> > collision is an issue. You said you have 2 Phis per node. Were you
> > running a single job across both? Were the Phis in separate PCIe
> > slots or on the same card (sorry, I should know this, but I have just
> > started looking at Phi)? If they are on separate cards in separate
> > slots, can I assume that I am limited to MPI parallel implementations
> > when using both?
> >
> > Maybe that is more than a few questions ... ;-) ...
> >
> > Regards,
>
>