[Beowulf] OT? GPU accelerators for finite difference time domain
Gerry Creager
gerry.creager at tamu.edu
Mon Apr 2 07:04:42 PDT 2007
Richard Walsh wrote:
> Mark Hahn wrote:
>>> The next gen of hardware will support native double precision (AFAIK).
>> my point is that there's native and there's native. if the HW supports
>> doubles, but they take 8x as long, then there's still a huge reason to
>> make sure the program uses only low-precision. and 8x (WAG, of course)
>> may actually be enough so that a 4-core, full-rate SSE CPU beats it
> I would be surprised if they "faked" double precision in this way. GPUs
> are the widest thing you can get in a processor. My WAG is that they
> will provide true/fast 64-bit (minus the same IEEE 754 twiddles) by
> coalescing 32-bit ... reducing the floating point width of a given core
> by half, but still delivering lots of FLOPs. Especially with the G80,
> it makes sense to think of these GPUs as multi-core SIMD processors.
In discussions w/ Mike McCool of PeakStream at SC06, I think Mark is
correct. At this time, I believe they're still faking DP. Look for
hardware enhancements 3-4Q this calendar year.
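For anyone curious what "faking DP" on 32-bit hardware looks like: the
usual approach is double-single (float-float) arithmetic, where a value
is carried as an unevaluated sum of two floats and the rounding error of
each operation is recovered exactly. A minimal sketch of the addition
step (based on the classic Knuth/Dekker two-sum; not a claim about any
particular vendor's implementation):

```c
/* Double-single arithmetic: represent a value as hi + lo, two floats,
 * giving roughly 48 bits of significand from 32-bit hardware. */
typedef struct { float hi, lo; } dsfloat;

dsfloat ds_add(dsfloat a, dsfloat b)
{
    float s = a.hi + b.hi;                       /* rounded sum */
    float v = s - a.hi;
    float e = (a.hi - (s - v)) + (b.hi - v);     /* exact rounding error */
    e += a.lo + b.lo;                            /* fold in low parts */
    float hi = s + e;                            /* renormalize */
    dsfloat r = { hi, e - (hi - s) };
    return r;
}
```

So {1.0f, 0.0f} plus {1e-9f, 0.0f} keeps the 1e-9f in the low word even
though 1.0f + 1e-9f rounds to 1.0f in plain single precision. The cost
is around 10 float ops per add (more for multiply), which is why emulated
DP throughput is such a large fraction slower than native.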
Gerry
--
Gerry Creager -- gerry.creager at tamu.edu
Texas Mesonet -- AATLT, Texas A&M University
Cell: 979.229.5301 Office: 979.458.4020 FAX: 979.862.3983
Office: 1700 Research Parkway Ste 160, TAMU, College Station, TX 77843