Hi Greg,
> > Well, clearly we hope to move more towards hybrid methods -all that's old
> > is new again?-
>
> If you want bad performance, sure. If you want good performance, you
> want a device which supports talking to a lot of cores, and then
> multiple devices per node, before you go hybrid. The first two don't
> require changing your code. The last does.
>
> The main reason to use hybrid is if there isn't enough parallelism in
> your code/dataset to use the cores independently.

Actually, it's often *for* performance that we look towards hybrid methods, albeit indirectly. With RAM per node growing no faster than core counts, and with each MPI task in *some* of our codes having a pretty hefty memory footprint, using fewer MPI processes and more threads per task lets us fully utilize nodes that would otherwise have cores sitting idle for lack of available memory. Sure, we could rewrite the code to tackle that too, but in general it seems easier to add threading than to rework a complicated parallel decomposition, shared buffers, etc.
In a nutshell, even if a hybrid mode *costs* me 10-20% over a flat (non-hybrid) run on an equal number of processors, if it lets me use 50% more cores in a node it works out well for us. But yes, ignoring RAM constraints, non-hybrid parallelism tends to be nicer at the moment.
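To put rough (hypothetical) numbers on it: on a 24-core node where memory limits us to 16 MPI ranks, going hybrid fills all 24 cores, and even with a 15% per-core penalty that's about 24 * 0.85 ≈ 20 cores' worth of throughput versus 16. And the code change really is modest. Here's a minimal sketch of what I mean by "adding threading in"; it's purely illustrative (the FUNNELED threading level and the toy loop are stand-ins, not anything from our actual codes):

/* Illustrative hybrid MPI+OpenMP skeleton, not from any real code. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, nranks;

    /* Ask for MPI_THREAD_FUNNELED: only the master thread makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    double local = 0.0;

    /* Work that would otherwise be split across extra MPI ranks is
       spread over OpenMP threads within each rank instead. */
    #pragma omp parallel for reduction(+:local)
    for (int i = 0; i < 1000000; i++)
        local += 1.0 / (1.0 + i + rank);

    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("ranks=%d threads/rank=%d total=%f\n",
               nranks, omp_get_max_threads(), total);

    MPI_Finalize();
    return 0;
}

Built with something like mpicc -fopenmp, then launched with fewer ranks per node and OMP_NUM_THREADS set to cover the remaining cores.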
<div class="im">> But getting back to a technical vein, is the multiplexing an issue due to<br>
> atomic locks on mapped memory pages? Or just because each copy reserves its<br>
> own independent buffers? What are the critical issues?<br>
<br>
> It's all implementation-dependent. A card might have an on-board
> memory limit, or a limited number of "engines" which process
> messages. Even if it has an option to store some data in main memory,
> often that results in a scalability hit.

Thanks. I guess I need to read up on quite a bit more and set up some tests.

Cheers,
- Brian