Commodity supercomputing, was: Re: NDAs Re: [Beowulf] Nvidia,
cuda, tesla and... where's my double floating point?
landman at scalableinformatics.com
Mon Jun 30 13:38:10 PDT 2008
Gerry Creager wrote:
> I'm running WRF on ranger, the 580 TF Sun cluster at utexas.edu. I can
> complete the WRF single domain run, using 384 cores in ~30 min wall
> clock time. At the WRF Users Conference last week, the number of folks
> I talked to running WRF on workstations or "operationally" on 16-64 core
> clusters was impressive. I suspect a lot of desktop weather forecasting
> will, as you suggest, become the norm. The question, then, is: Are we
> looking at an enterprise where everyone with a gaming machine thinks
> they understand the model well enough to try predicting the weather, or
> are some still in awe of Lorenz' hypothesis about its complexity?
I see a curious phenomenon going on in crash simulation and NVH
(noise, vibration, and harshness). We see an increasing "decoupling,"
if you will, between the detailed issues of simulation and coding and
the end users of the simulation system. That is, the users may know
the engineering side, but they don't seem to grasp the finer aspects
of the simulation ... which results to take as reasonably accurate,
and which might not be.
I don't see this in chemistry, in large part because many of the
users also write their own software.
I think this "decoupling," where developers' and users' knowledge
starts to diverge, is simultaneously painful for the "older" crowd of
developer/users and a source of interesting opportunities for new
users. Basically, it commoditizes the ability to run the codes. The
question is whether you can give users better guidance about the
likelihood that a given run is reasonable, while abstracting away the
details of the run.
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics LLC,
email: landman at scalableinformatics.com
web : http://www.scalableinformatics.com
phone: +1 734 786 8423
fax : +1 734 786 8452
cell : +1 734 612 4615