[Beowulf] Intel Phi musings
James Cownie
jcownie at cantab.net
Tue Mar 5 11:54:51 PST 2013
Up front: I work for Intel, and I even write software for the Intel(r) Xeon Phi(tm) coprocessor.
On 12 Feb 2013, at 16:38, Richard Walsh wrote:
> Curious about the observed benefits of hyper-threading, which generally offers
> little to floating-point intensive HPC computations where functional unit
> collision is an issue.
There's a big difference between the processors in the Phi and those in current Xeons.
The Phi CPUs are in-order processors, whereas the Xeons are out-of-order. On the
Xeons, hyper-threading is intended to allow the out-of-order CPU to schedule operations from either
hardware thread when there are spare functional units that aren't being used. If a single thread
can max out a functional unit (for instance the floating-point ALU), then enabling another hardware
thread is unlikely to significantly improve performance (as you observe!).
However, the intent in the in-order processor is different; here the aim is to provide extra
latency tolerance when one thread is stalled waiting for a cache or memory access, whereas in the
out-of-order core that stall is hidden by the out-of-order mechanism.
So the benefits of running more hardware threads on the Phi can be much larger than on the
big, out-of-order core, and I would certainly recommend running at least two threads/core
unless you are seriously memory-bandwidth bound.
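To make the latency-tolerance point concrete, here is a minimal micro-benchmark sketch of my own
(illustrative only; the kernel, sizes and names are assumptions, not anything from the original
discussion). Each thread chases pointers through its own randomly permuted array, so it is bound by
memory latency rather than floating-point throughput; timing it with 1, 2, 3 and 4 threads/core should
show how much latency the extra hardware threads on the in-order core can hide.

    /* Latency-bound pointer-chasing kernel (hypothetical example).
     * Build with the Intel compiler, e.g.: icc -mmic -openmp -O2 chase.c -o chase */
    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    #define N     (1UL << 21)   /* elements per thread: 16 MB, well beyond L2 */
    #define STEPS (1UL << 24)   /* dependent loads per thread */

    int main(void)
    {
        long sum = 0;
        double t0 = omp_get_wtime();

    #pragma omp parallel reduction(+:sum)
        {
            size_t *next = malloc(N * sizeof *next);
            size_t i, j, tmp, p;
            /* cheap per-thread 64-bit LCG, so we don't rely on rand() being thread-safe */
            unsigned long rng = (unsigned long)omp_get_thread_num() * 2654435761UL + 12345UL;

            /* Sattolo's algorithm: a single-cycle permutation, so p = next[p]
             * walks the whole array and the prefetcher can't help. */
            for (i = 0; i < N; i++) next[i] = i;
            for (i = N - 1; i > 0; i--) {
                rng = rng * 6364136223846793005UL + 1442695040888963407UL;
                j = rng % i;
                tmp = next[i]; next[i] = next[j]; next[j] = tmp;
            }

            /* Every load depends on the previous one: pure latency, no FP work. */
            for (p = 0, i = 0; i < STEPS; i++) p = next[p];
            sum += (long)p;            /* keep the result live */
            free(next);
        }

        printf("threads=%d  time=%.3fs  (sum=%ld)\n",
               omp_get_max_threads(), omp_get_wtime() - t0, sum);
        return 0;
    }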
When investigating scalability on the Phi, my preference is to plot cores along the x-axis and treat
1 thread/core, 2 threads/core, ... 4 threads/core as separate series. I find this easier to understand than
a plot with total threads on the x-axis, because there it's hard to distinguish 60 threads as 15 cores x 4 threads
from 60 threads as 20 cores x 3 threads, 30 cores x 2 threads, or 60 cores x 1 thread.
If you're using OpenMP, then the KMP_PLACE_THREADS environment variable makes it easy to play with
allocations of that sort.
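For example (a sketch of my own, with assumed KMP_PLACE_THREADS values; check the compiler
documentation for the exact syntax), a trivial program like this lets you confirm where the
threads actually landed:

    /* Hypothetical placement check: each OpenMP thread reports the logical CPU it is on. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sched.h>
    #include <omp.h>

    int main(void)
    {
    #pragma omp parallel
        {
            printf("OpenMP thread %3d of %3d on logical CPU %3d\n",
                   omp_get_thread_num(), omp_get_num_threads(), sched_getcpu());
        }
        return 0;
    }

Run it with something like
    KMP_PLACE_THREADS=15C,4T KMP_AFFINITY=compact ./places
    KMP_PLACE_THREADS=30C,2T KMP_AFFINITY=compact ./places
and sweep the core count in the C field while holding the threads-per-core T field fixed; that gives
you exactly the per-series data for the cores-on-the-x-axis plot described above.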
--
-- Jim
--
James Cownie <jcownie at cantab.net>