[Beowulf] Nvidia Tesla GPU clusters?

Toon Knapen toon.knapen at fft.be
Wed Jul 18 23:26:17 PDT 2007

Mark Hahn wrote:
>>>> http://www.nvidia.com/object/tesla_computing_solutions.html
>>> Can anyone point me to more information about the 'thread execution 
>>> manager' and how threads enable getting optimal performance out 
>>> of this hardware?
> afaict, that phrase is what would normally be the "instruction scheduler"
> on a normal CPU.  I don't believe it's actually managing separate
> (MIMD) threads.
>> much a wasteland these days) asking how jobs were going to be
>> scheduled on a GPU. Nobody knew. I would think this would be
> the GPU, at least as currently conceived, is an exclusive resource
> which the kernel can arbitrate access to.  if you really do have several 
> GPU-using processes, the kernel would need to swap the GPU state in/out 
> on each transition, which would be painful.  GPUs don't have protection 
> for shared access afaict.

I thought so too: I have always considered the GPU to be a really good 
vector processor. But seeing the 'thread execution manager' stuff, I am 
now wondering whether it is also possible to exploit the power of the 
GPU using multiple (non-vector) threads.
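For what it's worth, NVIDIA's CUDA toolkit exposes exactly that model: you write a scalar kernel, and the hardware's thread scheduler runs thousands of lightweight threads over the data, grouped into SIMD-style "warps" of 32. A minimal sketch of the idea, assuming the CUDA runtime API (the kernel name and launch geometry are just illustrative):

```
#include <cuda_runtime.h>

// Each thread computes one element; the hardware's "thread execution
// manager" schedules warps of 32 threads in SIMD lock-step and
// interleaves warps to hide memory latency.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                        // guard: the grid may overshoot n
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;
    float *x, *y;
    cudaMalloc((void **)&x, n * sizeof(float));
    cudaMalloc((void **)&y, n * sizeof(float));
    // ... fill x and y, e.g. via cudaMemcpy from host buffers ...

    // Launch one thread per element: 4096 blocks of 256 threads each.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaThreadSynchronize();

    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

So the "threads" here are not OS-level MIMD threads: they are data-parallel lanes that the scheduler interleaves to hide latency, which makes the machine look closer to a vector processor with a scalar programming model than to a multi-core CPU.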

