[Beowulf] getting a phi, dammit.
Mark Hahn
hahn at mcmaster.ca
Tue Mar 5 22:00:46 PST 2013
> And, up front, I work for Intel and even on Intel(r) Xeon Phi(c)
>coprocessor (yes, that is the official Intel branding!) software. (Though
>I've been on this list since w..a..y back, well before I worked for Intel!)
It's great to have vendors participate!
well, at least in a minimally-advertising role ;)
>> and drop it into most any gen1/2/3 PCIe x16 slot and it'll work (assuming
>> I provide the right power and cooling, of course.)
>
> The issue here is that because we offer 8GB of memory on the cards, some
> BIOSes are unable to map all of it through the PCI either due to bugs or
> failure to support so much memory. This is not the only people suffering [...]
interesting. but it seems like there are quite a few cards out there
with 4-6GB (admittedly, mostly higher-end workstation/gp-gpu cards.)
is this issue a bigger deal for Phi than the Nvidia family?
is it more critical for using Phi in offload mode?
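to be concrete about what I mean by offload mode: the explicit-copy style
with the compiler's offload pragma, roughly as below. I'm going from the
documented LEO syntax and haven't run this exact snippet, so take it as a
sketch; the function and array names are just made up for illustration.

    /* hedged sketch: run a loop on the card in offload mode, explicitly
       staging the arrays across PCIe with in()/out() clauses (Intel
       compiler's LEO extensions; mic:0 names the first coprocessor). */
    void scale(float *a, float *b, int n)
    {
        #pragma offload target(mic:0) in(a : length(n)) out(b : length(n))
        {
            for (int i = 0; i < n; i++)
                b[i] = 2.0f * a[i];
        }
    }

each in()/out() clause there is a host<->card copy across the bus, which
is part of why I'm curious how those accesses actually work underneath.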
it would be interesting to know how Intel thinks about the issue of
card-host/host-card memory accesses. my understanding of Phi is that
there's a DMA engine that can perform copies across PCIe. and your
comments imply that Phi RAM can be mapped directly into the host's
virtual address space (including at user level?). can code on the Phi
also map host memory into its space?
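the closest thing I've seen in the programming model is the "virtual
shared memory" style with _Cilk_shared/_Cilk_offload, sketched below. my
understanding, which could be wrong, is that this is runtime-managed
syncing of a region kept at the same virtual address on both sides, not a
literal mapping of one side's RAM into the other's page tables, hence the
question.

    /* hedged sketch of the virtual-shared-memory offload style
       (_Cilk_shared / _Cilk_offload).  as I understand it the runtime
       keeps 'data' at the same virtual address on host and card and
       reconciles it around offloads, rather than either side doing
       loads/stores straight into the other side's physical RAM. */
    #define N 1000

    _Cilk_shared int data[N];

    _Cilk_shared void fill(void)
    {
        int i;
        for (i = 0; i < N; i++)
            data[i] = i;
    }

    int main(void)
    {
        _Cilk_offload fill();                  /* runs on the coprocessor */
        return data[N - 1] == N - 1 ? 0 : 1;   /* host sees the card's writes */
    }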
> (Indeed other threads here have been complaining that 8GB is too little memory).
well, it's not much per core, especially if, as you suggest elsewhere,
it's important to try to use HT (i.e., 8G across 120 hardware threads is only ~67M each...)
I suppose the picture changes if the card can make direct references
to host memory, though.
thanks, mark hahn.