[Beowulf] OT? GPU accelerators for finite difference time domain

Mark Hahn hahn at mcmaster.ca
Sun Apr 1 13:07:07 PDT 2007


> CUDA comes with a full BLAS and FFT library (for 1D,2D and 3D transforms).

I read the CUDA doc, but I guess I was focusing on the language itself.

> You can have relevant speed up  even for 2D transforms or for a batch of 1Ds.

I assume this is only single-precision, and I would guess that for 
numerical stability, you must be limited to fairly short fft's.
what kind of peak flops do you see?  what's the overhead of shoving 
data onto the GPU, and getting it back?  (or am I wrong that the GPU 
cannot do an FFT in main (host) memory?

> You can offload only compute intendive parts of your code to the GPU
> from C and C++ ( writing a wrapper from Fortran should be trivial).

sure, but what's the cost (in time and CPU overhead) to moving data 
around like this?

> The current generation of the hardware supports only single precision,
> but there will be a double precision version towards the end of the
> year.

do you mean synthetic doubles?  I'm guessing that the hardware isn't
going to gain the much wider multipliers necessary to support doubles 
at the same latency as singles...

> PS: I work on CUDA at Nvidia, so I may be a little biased...

I did guess from the nvidia-limited nature of your reply,
but thanks for confirming it.

>> as far as I know, there are not any well-developed libraries which simply

by "well-developed", I did also mean "runs on any GPU or at least not a
single vendor"...



More information about the Beowulf mailing list