<br><br><div class="gmail_quote">2008/11/18 Finch, Ralph <span dir="ltr"><<a href="mailto:rfinch@water.ca.gov">rfinch@water.ca.gov</a>></span><br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<br>
Given the very substantial speed improvements with GPUs,<br>
will there be a movement to GPU clusters, even if there is a substantial<br>
cost in problem reformulation? Or are GPUs only suitable for a rather<br>
narrow range of numerical problems?<br>
<br>
</blockquote><div><br>My take? Yes, there WILL be a movement to GPU clusters. Note the tense. It has not happened yet.<br><br>Speaking as someone responsible for running commercial codes on clusters, I've recently been talking to a former colleague in medical imaging whose group is getting good results with CUDA, and someone who is getting good results in CFD work.<br>
BUT if you are not writing your own codes, you should be looking at a Beowulf-type cluster.<br>Find yourself a vendor you have the warm-and-fuzzies with.<br>Seriously, as you leftpondians say, it is like dating.<br>Also speak with the other researchers who are running these models - maybe they behave well with InfiniBand interconnects, or work well with Myrinet.<br>
<br>Let's be very honest here - we all have a huge amount of computing power on our desks, many times that of the original Cray-1 systems. The art is to install, care for, and run Beowulf-class systems efficiently. Yes, for certain algorithms and certain problems CUDA and FireStream accelerate things by 10, 20...100 times. But don't lose track of the amount of power in the current generation - and in the imminent Shanghai and Nehalem systems.<br>
<br><br></div></div>