[Beowulf] [gregory.brittelle at kirtland.af.mil: Re: Intel?]
Jim Lux
James.P.Lux at jpl.nasa.gov
Thu Jun 9 11:54:23 PDT 2005
At 08:38 AM 6/9/2005, Joe Landman wrote:
>Yup. This is part of the reason I was asking so many questions a while
>ago on price points for high performance bits (hardware, software,
>otherwise). The cell processor bits on a card in a PCI-e slot: what
>would a person pay for hardware like that (if it gave their application a
>10-100x speedup)? Would they pay 10k$ (2 high end compute nodes today for
>10-100x faster performance)? Would they pay $5000? Same question for
>software. If you have software which makes your life dramatically
>better, what are you willing to pay for it?
>
>I don't think there is anything wrong with 100-500$/seat for really good
>high quality software that strongly positively impacts your ability to get
>work done (much like the vaporous hardware above). I am not sure I would
>spend more than 500$ unless there was a really good business case. And
>this gets to a point you have made in the past. How do software vendors
>price things for clusters? Per seat will be painful for consumers. Per
>user will be less painful, but also reduce usage. Per cluster is probably
>a better compromise (consider the cluster to be one machine, and the
>compute nodes are not independent machines). Of course, this doesn't fit
>in most current software vendors' pricing models.
There are software packages for which a good business case can be made at a
substantially higher price point. RF design or IC layout are cases in
point. The software is very difficult to develop (once you get past
trivial cases, which are available for free) and validate, and then there's
the whole "device model" aspect. However, the package can easily save 10%
of the engineer's time (if not more... some jobs just cannot be done with
"pencil and paper and free tools"). I'm thinking here of products like
Ansoft's HFSS, or Agilent's ADS, or other similar products. The total market
is also kind of small (hundreds, or thousands at most), although one could
argue that if it were cheap enough there might be more demand; but even
then, the package is complex enough and addresses an esoteric enough need
that even if it were free, the total number of customers would be small. So, if
you're going to spend a few tens of millions of dollars developing the
software (uh huh... 20,30,..100 electromagnetics folks, software people,
etc., at $250k/yr to develop the underlying codes, the useful interfaces,
the libraries, and support those other $250k/yr users out in the field),
then a few kilobucks a copy is actually a pretty good deal.
However, the entire business case rests on making an engineer more
efficient, so the logical way to price it is by user seat, not by the
number of processors that the engineer happens to use. The big benefit
comes from having the tool at all, and the incremental advantage from
having the tool run faster (as on a cluster) is substantially smaller
(although not zero...). There are "big" problems for which one can
cost-justify spending more to solve them faster (turning a one-week
simulation into an hour-long run is a very worthwhile cost savings,
especially if that $250k/yr engineer is sitting there waiting for the
results), but those problems are fewer still. Here, though, cheap high
performance would change how many big
problems even get tackled in the first place, and it might well be that
the ability to do finer gridding with a simpler model would reduce the
amount of hand tuning required for model building at a coarser grid.
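
For what it's worth, the back-of-the-envelope arithmetic behind that pricing
argument looks roughly like the quick Python sketch below. The $250k/yr and
10% figures are the ones above; the $5k seat price and 2000-hour work year
are round numbers I've assumed for illustration.

# Rough payback numbers; cost and time-savings figures are from the post,
# the seat price and work-year length are assumptions for illustration.
ENGINEER_COST_PER_YEAR = 250_000.0   # fully burdened $/yr
TIME_SAVED_FRACTION = 0.10           # "save 10% of the engineer's time"
SEAT_PRICE = 5_000.0                 # "a few kilobucks a copy" (assumed value)
HOURS_PER_WORK_YEAR = 2_000.0        # assumed

value_per_seat_per_year = ENGINEER_COST_PER_YEAR * TIME_SAVED_FRACTION
print(f"Value of the tool per engineer per year: ${value_per_seat_per_year:,.0f}")
print(f"Payback time on a ${SEAT_PRICE:,.0f} seat: "
      f"{12 * SEAT_PRICE / value_per_seat_per_year:.1f} months")

# The cluster-speedup case: a one-week simulation reduced to an hour,
# with the engineer otherwise idle while waiting for the results.
hourly_cost = ENGINEER_COST_PER_YEAR / HOURS_PER_WORK_YEAR
waiting_cost_saved = (40.0 - 1.0) * hourly_cost
print(f"Engineer time recovered per big run: ${waiting_cost_saved:,.0f}")

On those assumptions a seat pays for itself in a couple of months, while the
speedup recovers a few thousand dollars of idle engineer time per big run,
which is why the seat is where the pricing leverage is.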
>[...]
>
>I hope no one writes off Microsoft, as they have shown a remarkable
>propensity to adapt. Their model is under assault, and the economy
>appears to be changing in such a way to disfavor their business model.
>Doesn't mean they are going out of business, but I do expect them to adapt.
>
>That said, I haven't seen/heard about the economics of the windows HPC
>solution. If it is like their desktop system model (price per seat,
>limited number of client connectivity), it is designed to fail.
The limited rumblings I have heard on where MS is heading have to do with
providing "web services". This doesn't mean "web sites" per se; the idea is
that an application doesn't care whether a particular capability is provided
locally or remotely, and the transport medium is the
web. For instance, you might have a web service that provides an image of
a recorded loan document in response to a loan number. Or, perhaps a web
service that takes care of recording a deed. The former case might be
entirely within your company, and is essentially a way of making the
interface abstract and transparent to the calling application. The latter
case might be representative of every county recorder providing a
web-services interface for their own peculiar way of recording deeds, and
my application would just go out and work against county A's or county B's
web service.
Looking at a more mundane example, say I've got some sort of software
function in my application (calculating tax withholding, for instance). If
the tax tables change, I have to go into my code and change the
appropriate part (not an overly complex thing, since I've presumably
modularized my code). I have the responsibility of monitoring when tax
tables change, rolling out the changes to all my users, etc. On the other
hand, if the IRS provides the withholding calculation as a web service, I
just invoke it, and it's done. As far as actual implementation goes, there
might actually be a copy of the tables and algorithms on my computer, and
the work of calculating is done locally, but the web-service layer hides the
details of making sure it's up to date, collecting a new version if needed,
etc. Or, it might be entirely remote, with all the calculations done by the
service provider.
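
To make that concrete, here's a minimal sketch in Python. The service URL,
the method name, and the placeholder tax arithmetic are all hypothetical
(no such IRS service exists); the point is only that the rest of the
application calls one withholding() function and neither knows nor cares
whether the answer comes from a local copy of the tables or from the remote
service.

import xmlrpc.client

# Hypothetical endpoint, purely for illustration.
WITHHOLDING_SERVICE_URL = "http://example.gov/withholding-rpc"

def withholding_local(gross_pay, allowances):
    """Local fallback: a copy of the tables lives in my code, and I am
    responsible for updating it when the tables change."""
    # Placeholder arithmetic, not real tax rules.
    taxable = max(gross_pay - 300.0 * allowances, 0.0)
    return round(taxable * 0.15, 2)

def withholding(gross_pay, allowances, use_service=True):
    """Single entry point used by the rest of the application."""
    if use_service:
        try:
            proxy = xmlrpc.client.ServerProxy(WITHHOLDING_SERVICE_URL)
            # The service provider keeps the tables current; I just call it.
            return proxy.calculate_withholding(gross_pay, allowances)
        except (OSError, xmlrpc.client.Error):
            pass  # service unreachable: fall back to the local copy
    return withholding_local(gross_pay, allowances)

if __name__ == "__main__":
    # Using the local fallback here so the example runs stand-alone.
    print(withholding(2500.00, 2, use_service=False))

Whether the calculation actually happens on my machine (with the component
quietly keeping its tables current) or entirely on the provider's machines
is invisible to the caller, which is the whole point.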
MS would love to get into this model because it lends itself to "per use"
pricing. Rather than sell you a copy of MSWord, hope you do updates
periodically, and suffer through your whining about them not supporting the
version that is 10 versions old, they would just provide an MSWord
service. All the maintenance and updates and
compatibility issues are hidden.
They're getting close to this nowadays with "WindowsUpdate", which is
fairly automated. It's not too far in the future when you'll basically pay
by the year or month to use Windows, and you'll always effectively have the
latest version. There also won't be any way for YOU as the end user to go
in and change configurations... you'll take what's provided and that's
it. When your computer gets too old to support the current version, you'll
just have to buy a new computer.
Naturally, this whole process, if it's to fly, has to make Corporate IT
folks confident that it will work reliably. Right now, most large
companies (MS's bread and butter in sales) have a staged rollout and
verification process where they rigorously test a service pack before
rolling it out to all the desktops in the company. They hold off
installing everything that comes along, to reduce the risk of configuration
control nightmares and mystery incompatibilities. Typically, these
rollouts occur every few months or maybe once a year, because of the huge
verification work involved. MS has to make the IT folks confident that
they don't need to do all that verification.
James Lux, P.E.
Spacecraft Radio Frequency Subsystems Group
Flight Communications Systems Section
Jet Propulsion Laboratory, Mail Stop 161-213
4800 Oak Grove Drive
Pasadena CA 91109
tel: (818)354-2075
fax: (818)393-6875