Commodity supercomputing, was: Re: NDAs Re: [Beowulf] Nvidia, cuda, tesla and... where's my double floating point?
Gerry Creager
gerry.creager at tamu.edu
Mon Jun 30 16:09:26 PDT 2008
Joe Landman wrote:
>
>
> Gerry Creager wrote:
>
>> I'm running WRF on ranger, the 580 TF Sun cluster at utexas.edu. I
>> can complete the WRF single domain run, using 384 cores in ~30 min
>> wall clock time. At the WRF Users Conference last week, the number of
>> folks I talked to running WRF on workstations or "operationally" on
>> 16-64 core clusters was impressive. I suspect a lot of desktop
>> weather forecasting will, as you suggest, become the norm. The
>> question, then, is: Are we looking at an enterprise where everyone
>> with a gaming machine thinks they understand the model well enough to
>> try predicting the weather, or are some still in awe of Lorenz'
>> hypothesis about its complexity?
>
> I see a curious phenomenon going on in crash simulation and NVH. We
> see an increasing "decoupling", if you will, between the detailed
> issues of simulation and coding, and the end users of the simulation
> system. That is, the users may know the engineering side, but don't
> seem to grasp the finer aspects of the simulation ... what to take as
> reasonably accurate, and what to recognize might not be.
>
> I don't see this in chemistry, in large part because many of the
> users also write their own software.
>
> I think this "decoupling", where developers' and users' knowledge
> starts diverging, is simultaneously painful for the "older" crowd of
> developer/users and opens up interesting opportunities for new users.
> Basically it commoditizes the ability to run the codes. The question
> is whether you can provide better guidance to the users about the
> likelihood of a run being reasonable, while abstracting away the
> details of the run.
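On that last question, the guidance I can imagine looks like a
pre-flight sanity check: validate the setup before burning cluster
time, without asking the user to understand the numerics. A minimal
sketch in Python (the function and thresholds are mine, not anything
shipped with WRF; the dt <= 6 * dx-in-km rule is the usual WRF ARW
rule of thumb):

  def preflight(dx_km, dt_s, n_cores):
      """Return a list of warnings about a proposed model run."""
      warnings = []
      # WRF ARW guidance: time step (s) of roughly 6x the grid
      # spacing (km); much beyond that risks CFL instability.
      if dt_s > 6.0 * dx_km:
          warnings.append("time step %.0f s is large for dx = %.1f km; "
                          "expect CFL violations" % (dt_s, dx_km))
      if n_cores < 1:
          warnings.append("need at least one core")
      return warnings

  for w in preflight(dx_km=4.0, dt_s=36.0, n_cores=384):
      print("WARNING:", w)

The point isn't these particular checks; it's that the expertise gets
encoded once, by someone who has it, instead of relearned the hard way
by every user.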
In my world, we discuss (often amongst ourselves) the concept of
forecast uncertainty. In point of fact, the general vehicle we use to
document our uncertainty is an ensemble, where we tweak initial
conditions, or tweak physics parameters while holding initial
conditions fixed (don't ask what happens when one tries to tweak both
physics and initial conditions at once, though I've had someone try
recently). Each member's variation isn't "improving the model
average"; it's demonstrating variance about a central condition.
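For anyone who wants to see that without firing up WRF, here's a toy
sketch (mine, purely illustrative: Lorenz-63 with an invented
perturbation size, nothing we run operationally) of what ensemble
spread is actually telling you:

  import numpy as np

  def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
      # Classic Lorenz-63 system, textbook parameter values.
      x, y, z = state
      return np.array([sigma * (y - x), x * (rho - z) - y,
                       x * y - beta * z])

  def rk4_step(f, state, dt):
      # One fourth-order Runge-Kutta step.
      k1 = f(state)
      k2 = f(state + 0.5 * dt * k1)
      k3 = f(state + 0.5 * dt * k2)
      k4 = f(state + dt * k3)
      return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

  rng = np.random.default_rng(0)
  center = np.array([1.0, 1.0, 1.05])   # the "central condition"
  members = center + 1e-3 * rng.standard_normal((20, 3))  # tweaked ICs

  dt = 0.01
  spread = []
  for _ in range(1500):
      members = np.array([rk4_step(lorenz63, m, dt) for m in members])
      spread.append(members.std(axis=0).mean())  # spread about the mean

  for t in (100, 500, 1000, 1400):
      print("t = %5.2f   mean spread = %.4f" % (t * dt, spread[t]))

The spread starts out near the size of the initial perturbation and
grows until the members bear little resemblance to one another. That
growth curve is the uncertainty statement; it is the whole point of
running the ensemble.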
The "users", though, often look at a group of models, with different
initializations, different physics, and different results, and assume
that a good forecast is a simple average of them.
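A contrived example of why that fails (the grid, rain amounts, and
storm positions below are all invented for illustration):

  import numpy as np

  # Twenty hypothetical members: half put a rain maximum near grid
  # point 20, half near grid point 80.
  x = np.arange(100)
  members = np.array([5.0 * np.exp(-0.5 * ((x - c) / 5.0) ** 2)
                      for c in [20] * 10 + [80] * 10])

  mean = members.mean(axis=0)
  print("typical member peak: %.2f" % members.max(axis=1).mean())  # ~5.0
  print("mean-field peak:     %.2f" % mean.max())                  # ~2.5

The averaged field puts half the rain in each place: a forecast that
no individual member produced, and one that hides the real message,
which is "we don't know which of two solutions will verify."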
Therein lies the fundamental difference between the modelers and the
users: the modelers generally have a feel for the weaknesses of the
model, while a lot of users have no such concern. The model is a black
box, and since it produces a numerical result, it's automatically
right. They don't see differing numerical results as a sign that we
know our models' limitations and are trying to present them, but as
evidence that by doing more model runs we are strengthening our
result.
In some ways the two groups are converging, but in the groups I still
work with and present to, the level of confidence in my WRF forecasts
exceeds what I consider prudent.
gerry
--
Gerry Creager -- gerry.creager at tamu.edu
Texas Mesonet -- AATLT, Texas A&M University
Cell: 979.229.5301 Office: 979.458.4020 FAX: 979.862.3983
Office: 1700 Research Parkway Ste 160, TAMU, College Station, TX 77843