[Beowulf] Nvidia Tesla GPU clusters?
Mark Hahn
hahn at mcmaster.ca
Wed Jul 18 20:59:20 PDT 2007
>>> http://www.nvidia.com/object/tesla_computing_solutions.html
>>
>> Can anyone point me to more information about the 'thread execution
>> manager' and how threads can enable getting optimal performance out of
>> this hardware?
afaict, that phrase is what would normally be called the "instruction scheduler"
on a normal CPU. I don't believe it's actually managing separate
(MIMD) threads; the threads all run the same kernel code and get issued
in SIMD batches.
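fwiw, the CUDA model that Tesla is sold with reflects this: you launch
thousands of lightweight threads that all execute the same kernel, and the
hardware issues them in SIMD batches. a rough sketch of what that looks like
(untested; error checking and data initialization omitted, and the saxpy
name and sizes are just illustrative):

// saxpy.cu: every thread runs the same kernel; the hardware issues
// them in SIMD batches ("warps"), so this is data parallelism, not
// independently scheduled MIMD threads.
#include <stdio.h>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // per-thread index
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    int n = 1 << 20;
    float *x, *y;
    cudaMalloc((void **)&x, n * sizeof(float));
    cudaMalloc((void **)&y, n * sizeof(float));
    // launch ~1M threads in blocks of 256; the "thread execution
    // manager" just keeps the SIMD units fed with ready batches.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaThreadSynchronize();
    cudaFree(x);
    cudaFree(y);
    return 0;
}

the point being that the scheduler only has to pick which ready batch to
issue next, not juggle truly independent instruction streams.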
> much a wasteland these days) asking how jobs were going to be
> scheduled on a GPU. Nobody knew. I would think this would be
the GPU, at least as currently conceived, is an exclusive resource
to which the kernel can arbitrate access. if you really do have
several GPU-using processes, the kernel would need to swap the GPU
state in and out on each transition, which would be painful. GPUs
don't have protection for shared access, afaict.
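so in practice a cluster scheduler has to treat the card as a consumable,
one job at a time. if you wanted to enforce that from userspace you could
do something crude like the following (just a sketch; the lock-file path
is made up, and it only coordinates jobs that agree to take the lock):

/* gpu_lock.c: crude advisory serialization of GPU access between
   cooperating jobs.  the lock-file path is illustrative; error
   handling is minimal. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/var/lock/gpu0.lock", O_RDWR | O_CREAT, 0666);
    if (fd < 0) { perror("open"); return 1; }

    if (flock(fd, LOCK_EX) != 0) {   /* block until we "own" the GPU */
        perror("flock");
        return 1;
    }

    /* ... run the GPU-using job here; other cooperating processes
       block in flock() until we release the lock ... */

    flock(fd, LOCK_UN);
    close(fd);
    return 0;
}

of course this gives you none of the memory protection between contexts
that a real multi-user setup would want; that's exactly the missing piece.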
this may change as Intel's Larrabee and AMD's Fusion develop. the former
appears to be an x86-based chip with a handful of cores and some mods to make
graphics more efficient (wide SIMD, basically). it would be strange if it didn't
have the minimal hardware support necessary to work in a multi-user
(protected) environment. Fusion is, afaict, even less well-defined, but
appears to be some on-chip agglomeration of mixed cpu/gpu hardware. I can't
tell whether it attempts to provide a familiar programming model (that is,
basically extensions to the cpu, such as wider SIMD).
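that "extensions to the cpu" model already exists in miniature as SSE
intrinsics; my guess is Larrabee-style code would look much the same, just
with wider vectors. this is plain SSE, nothing Larrabee- or Fusion-specific,
and the function name is just illustrative:

/* same saxpy as above, but on the host cpu with 4-wide SSE vectors;
   a "wider SIMD" extension would simply grow the lane count. */
#include <xmmintrin.h>

void saxpy_sse(int n, float a, const float *x, float *y)
{
    __m128 va = _mm_set1_ps(a);              /* broadcast a into 4 lanes */
    for (int i = 0; i + 4 <= n; i += 4) {
        __m128 vx = _mm_loadu_ps(x + i);
        __m128 vy = _mm_loadu_ps(y + i);
        vy = _mm_add_ps(_mm_mul_ps(va, vx), vy);
        _mm_storeu_ps(y + i, vy);
    }
    for (int i = n & ~3; i < n; i++)          /* scalar tail */
        y[i] = a * x[i] + y[i];
}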
regards, mark hahn.