On 7 Mar 2013, at 17:29, Vincent Diepeveen wrote:

> On Mar 6, 2013, at 9:42 PM, James Cownie wrote:
>
>> On 6 Mar 2013, at 06:00, Mark Hahn wrote:
>>
>>>> The issue here is that because we offer 8GB of memory on the cards, some
>>>> BIOSes are unable to map all of it through the PCI either due to bugs or
>>>> failure to support so much memory. This is not the only people suffering
>>>
>>> interesting. but it seems like there are quite a few cards out there
>>> with 4-6GB (admittedly, mostly higher-end workstation/gp-gpu cards.)
>>> is this issue a bigger deal for Phi than the Nvidia family?
>>> is it more critical for using Phi in offload mode?
>>
>> I think this was answered by Brice in another message. We map all of the memory
>> through the PCI, whereas many other people only map a smaller buffer, and therefore
>> have to do additional copies.
>
> James, not really following exactly what you mean by 'through the PCI'.

I mean that all of the memory on the card can be seen from the host via the PCI, and,
therefore, that PCI transfers to the memory on the card can go directly to the final destination.

The alternative would be to map a smaller window of memory on the card for PCI transfers
(thus avoiding the large PCI aperture issue, which is where we came into this), but then, to move
data to/from an arbitrary memory location on the card, you'd have to DMA it across the PCI to the
buffer space and then copy it to the final destination (or the reverse for transfers to the host,
of course).
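To make the difference concrete, here is a rough host-side sketch of the two schemes. The
function names (dma_host_to_window, card_local_copy, dma_host_to_card) and the window size are
invented purely for illustration; they are not any real driver or runtime API:

#include <stddef.h>

/* Hypothetical transfer primitives, invented for this sketch only. */
extern void dma_host_to_window(const void *host_src, size_t len);             /* host -> PCI-visible staging window */
extern void card_local_copy(size_t card_dst_offset, size_t len);              /* staging window -> final card address */
extern void dma_host_to_card(const void *host_src, size_t card_dst, size_t len); /* host -> final card address */

#define WINDOW_SIZE (256 * 1024)   /* size of the PCI-mapped staging window (assumed) */

/* Scheme 1: only a small window of card memory is visible over the PCI, so every
 * transfer to an arbitrary card address needs a second, on-card copy. */
static void copy_to_card_via_window(const char *host_src, size_t card_dst, size_t len)
{
    size_t done = 0;
    while (done < len) {
        size_t chunk = (len - done < WINDOW_SIZE) ? (len - done) : WINDOW_SIZE;
        dma_host_to_window(host_src + done, chunk);   /* DMA across the PCI into the window       */
        card_local_copy(card_dst + done, chunk);      /* the "extra copy": window -> destination  */
        done += chunk;
    }
}

/* Scheme 2: all of the card's memory is PCI-mapped, so one DMA lands the data
 * at its final destination and no staging copy is needed. */
static void copy_to_card_direct(const char *host_src, size_t card_dst, size_t len)
{
    dma_host_to_card(host_src, card_dst, len);
}

Scheme 1 keeps the PCI aperture small, at the cost of the extra copy (and the bandwidth it
consumes); scheme 2 is what we do, which is why the BIOS has to be able to map all 8GB.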
> If you do memory through the pci, isn't that factor 10+ worse in bandwidth than when using device RAM?

Yes, of course. I'm not advocating doing this at user level; rather, we're discussing the underlying
mechanisms used for the PCI copying that supports the transfers of data when the programmer has
requested them via whatever mechanism you choose to use.

> What matters is how much RAM you can allocate on the device for your threads of course.

Absolutely, which is why we put as much memory as we can on the card.

> Anything you ship through that PCI is going to be that slow in terms of bandwidth,
> that you just do not want to do that and really want to limit it.

And again, absolutely!

> If you transfer data from HOST (the cpu's) to the GPU, then AMD and Nvidia gpgpu cards can do that
> without stopping the gpu cores from calculation. So it happens in background. In this manner you need of
> course a limited buffer.

I am not an NVidia or AMD expert, but it seems to me that you must have to copy the data from the
buffer on the card that was PCI mapped and accessible to the host to where it finally wants to reside.
And that is the "extra copy" I mentioned originally. Even if you do that with a block-copy engine, it
still has to eat bandwidth while you're doing it, even if it is asynchronous to the FPUs.

You can, of course, do asynchronous transfers on the MIC too.
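For reference, on the NVidia side the background transfers Vincent describes are normally written
with pinned host memory and a stream; the fragment below is just my own illustrative CUDA sketch
(the kernel, sizes, and names are made up), not anything specific to the cards discussed here:

#include <cuda_runtime.h>

__global__ void scale(float *d, int n)              /* stand-in for real work on the card */
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= 2.0f;
}

int main(void)
{
    const int n = 1 << 20;
    float *h_buf, *d_buf;
    cudaStream_t stream;

    cudaMallocHost(&h_buf, n * sizeof(float));      /* pinned host memory, usable by the DMA engine */
    cudaMalloc(&d_buf, n * sizeof(float));
    cudaStreamCreate(&stream);
    for (int i = 0; i < n; ++i) h_buf[i] = 1.0f;

    /* The copy engine moves the data while the cores remain free to run work queued
     * in other streams; within this stream the three operations run in order. */
    cudaMemcpyAsync(d_buf, h_buf, n * sizeof(float), cudaMemcpyHostToDevice, stream);
    scale<<<(n + 255) / 256, 256, 0, stream>>>(d_buf, n);
    cudaMemcpyAsync(h_buf, d_buf, n * sizeof(float), cudaMemcpyDeviceToHost, stream);

    cudaStreamSynchronize(stream);                  /* wait for copy + kernel + copy to complete */

    cudaStreamDestroy(stream);
    cudaFree(d_buf);
    cudaFreeHost(h_buf);
    return 0;
}

None of this changes the bandwidth argument, of course; it only hides the transfer behind computation.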
> A problem some report with OpenCL is that if they by accident overallocate the amount of RAM they want to
> use on the gpu, that it is allocating Host memory, which as said before is pretty slow. Really more than factor 10.

Right, I think we're (perhaps surprisingly :-) ) in violent agreement.

--
-- Jim
--
James Cownie <jcownie@cantab.net>