I disagree with Mark on investing in GP-GPUs. I think it's a good thing to do, for the simple reason of understanding the programming model. I've been watching people work with GP-GPUs for several years, and there is always a big hump they have to get over: understanding how to take their algorithm and rewrite it for SIMD. Once they get over that hump, things get easier. This is also independent of precision - it doesn't matter whether you learn in SP or DP, as long as you learn.
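To make that hump concrete, here is a minimal sketch of the kind of rewrite involved, assuming nothing more than a simple element-wise loop; the names and launch parameters are only illustrative:

// Serial version: one loop over the data.
void scale_add(int n, float a, const float *x, float *y) {
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

// CUDA version: the loop body becomes a kernel, and each thread
// handles one element, so the algorithm has to be expressed as many
// independent, identical operations - the SIMD mindset.
__global__ void scale_add_kernel(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)  // guard: the grid may overshoot n
        y[i] = a * x[i] + y[i];
}

// Launched with enough 256-thread blocks to cover n, e.g.:
//   scale_add_kernel<<<(n + 255) / 256, 256>>>(n, a, d_x, d_y);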
I would love to see a common language for GP-GPUs, but my guess is that OpenCL will be slow in coming. In the meantime, CUDA is the leader and gaining ground. I haven't had a chance to talk to PGI about their new compiler with GP-GPU capability, but it sounds really fantastic (PGI makes a really great compiler).

Jeff

P.S. Sorry for the top posting, but this silly web-based email tool can't indent or do much of anything useful :)

----------------------------------------
From: Mark Hahn <hahn@mcmaster.ca>
To: Beowulf Mailing List <beowulf@beowulf.org>
Sent: Thursday, November 20, 2008 9:58:27 AM
Subject: Re: [Beowulf] What class of PDEs/numerical schemes suitable for GPU clusters
> Ellis, I can't say re. the Firestream cards, but for Nvidia the answer is a
> resounding yes.

AMD had some PR recently (see The Register and The Inquirer) about supporting their stream stuff across the whole product line, including chipset-integrated GPUs. That seems intelligent, given that the lines between CPU and GPU are obviously going to blur (Larrabee, Fusion, etc.).

IMO, it would be crazy to invest too much in the current generation of GP-GPU programming tools. Doing some pilot work with both vendors probably makes sense, but the field really does need OpenCL to succeed. I hope the OpenCL people are not too OpenGL-ish, and realize that they need to target SSE and its 512-bit successors as well.

> Virtually any recent card can run CUDA code. If you Google you can get a
> list of compatible cards.
Not that many NVidia cards support DP yet, though, which is probably important to anyone coming from the normal HPC world. There's some speculation that NV will try to keep DP as a market-segmentation feature to drive HPC towards high-cost Tesla cards, much as vendors have traditionally tried to herd high-end visualization onto 10x-priced cards.

regards, mark hahn.
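As a footnote to Mark's DP point: in the CUDA toolkit of that era, native double precision requires a compute capability 1.3 (GT200-class) device, and you can query that at runtime. A minimal sketch, assuming a file name of dpcheck.cu:

#include <cstdio>
#include <cuda_runtime.h>

// Report, for each CUDA device, whether it has native double-precision
// support (compute capability 1.3 or higher).
int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        bool has_dp = (prop.major > 1) || (prop.major == 1 && prop.minor >= 3);
        printf("device %d: %s, compute %d.%d, double precision: %s\n",
               d, prop.name, prop.major, prop.minor, has_dp ? "yes" : "no");
    }
    return 0;
}

// Build with the sm_13 target so nvcc generates real double-precision
// code instead of demoting doubles to float:
//   nvcc -arch=sm_13 dpcheck.cu -o dpcheck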