[Beowulf] Nvidia, cuda, tesla and... where's my double floating point?
diep at xs4all.nl
Sun Jun 15 04:31:12 PDT 2008
It seems the next CELL is now confirmed to support double precision.
Yet if you look back in history, Nvidia has made promises on this before.
The only problem with hardware like Tesla is that it is rather hard to
get technical information, such as which instructions Tesla supports.
This is crucial to know in order to speed up your code.
It is already tough to get real-world codes running faster on GPUs than
on CPUs. The equivalent CPU code has been optimized heavily, by people
who know everything about the hardware.
How fast is the latency to RAM when all 128 SPs are issuing requests at once?
Nvidia gives out zero information and doesn't support anyone either.
That has to change in order to get GPU calculations into more widespread use.
When I calculate on paper, for some applications a GPU can be a
factor 4-8 faster than a standard 2.4 GHz quad-core is right now.
Getting that performance out of the GPU is more than a full-time task
without in-depth technical hardware data on the GPU.
On May 5, 2008, at 9:40 PM, John Hearns wrote:
> On Fri, 2008-05-02 at 14:05 +0100, Ricardo Reis wrote:
>> Does anyone know if/when there will be double floating point on
>> these little toys from nvidia?
> I think CUDA is a great concept, and I am starting to work with it.
> I recently went to a talk by David Kirk, as part of the "world tour".
> I think the answer to your question is Real Soon Now.
> Beowulf mailing list, Beowulf at beowulf.org