[Beowulf] MS HPC... Oh dear...
landman at scalableinformatics.com
Mon Jun 12 18:36:23 PDT 2006
Robert G. Brown wrote:
> WinXX clusters would have to produce a really tremendous advantage in
> application, and I just don't see it ever doing so. Joe seems to think
> that they'll get traction by defining an MPI ABI -- I think that they're
Actually I think they will get traction from their infinite marketing
dollars, some high profile wins, and pressure upon the C-lettered people
to enforce platform monoculture. I doubt it will lower their support
costs (it will raise them quite a bit, IMO), but that's what will be fed
to the C-level folks.
My point about MPI is that they are going to do what they can to make it
easy, and make it just work. While I like the concept of making it just
work, there is a cost to participating in that model that I have not
> market presence will simply fracture the existing efforts to define one
> even more. It isn't like a WinXX binary is going to run on Linux,
> right? Windows AND MPI will be just like Linux AND any existing MPI, at
> best. And besides, I personally think that it is API that counts, not
> ABI, except maybe possibly at the hardware driver level. Is anyone
That was my point. I would like to see the API/ABI evolved to the point
where, when Greg's company comes out with Infinipath 2007++ with ESP
technology to send the packet your code was thinking about sending, but
never got around to sending, we don't need to relink code to use it.
Drop the driver in, have the ABI do its magic, with some possibly
environment-tunable parameters, and start the computing engines. That
pushes more work to NIC/data pump vendors, but it isn't so terrible, and
it makes other people's lives soooooo much better (wrestling with an
obscure version of LAM that doesn't cause segfaults with LS-Dyna is
*not fun*(TM)).
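The "drop the driver in, relink nothing" model amounts to resolving the MPI library at run time instead of link time. A minimal sketch of the idea in Python (the `MPI_ABI_LIB` variable name and the fallback are my own illustration, not part of any real standard):

```python
import ctypes
import ctypes.util
import os

def locate_mpi(name=None):
    """Resolve an MPI shared library at run time instead of link time.

    With a stable ABI, swapping interconnect vendors would mean changing
    an environment variable (MPI_ABI_LIB here, a made-up name), not
    relinking every application against the vendor's MPI build.
    """
    name = name or os.environ.get("MPI_ABI_LIB", "mpi")
    path = ctypes.util.find_library(name)   # searches the system linker path
    if path is None:
        return None                         # no such library installed
    return ctypes.CDLL(path)                # load it; symbols resolve by ABI
```

With an agreed-upon ABI, a launcher could hand back whatever vendor library is installed and the application binary would never know the difference.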
> foreseeing Myricom abandoning the Linux market? Quadrics? Infiniband?
> Yeah, right...
> And the whole point of MPI in the first place was to precisely counter
> any effort by a single company to introduce proprietary crap that adds
> to the cost of software ports or maintenance. Does anyone think that
Unfortunately, we are at the point where each vendor has to ship (most
of) the MPI they built with in order to make sure the customer can run
it, and worse, the newfangled hardware (see the imagined Infinipath
2007++ with ESP technology that can get negative latencies) doesn't work
with it, so ...
This frustrates software builders, and end users. My point is that
there is a better way, and Greg indicated that he had supported/proposed one.
Windows could get traction by making this stuff easier. They have done
it before with other things. Note: easier != better in all cases.
> choices of linux clusters do now. Cluster scaling is far and away
> dominated by HARDWARE resources and scaling, not software. So it will
Hmmm.... most of the apps I have seen are software bound at some point.
Some scale really well, but those are rare. 16-32 way runs are fairly
typical at customer sites. A few do more, many do less.
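The post doesn't name it, but Amdahl's law is the usual way to put numbers on "software bound": a small serial fraction caps scaling well before the hardware does, which is consistent with 16-32 way runs being typical. A quick sketch (the 95%-parallel figure is an illustrative assumption, not from the post):

```python
def amdahl_speedup(parallel_fraction, n_cpus):
    """Amdahl's law: speedup on n_cpus when parallel_fraction of the
    single-CPU run time parallelizes perfectly and the rest is serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cpus)

# Even a 95%-parallel code tops out fast as nodes are added:
for n in (16, 32, 64):
    print(n, round(amdahl_speedup(0.95, n), 1))
```

Doubling from 32 to 64 CPUs buys such a code well under a 1.3x improvement, which is why buyers stop at modest node counts.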
> come right down to trading cluster nodes for Windows licenses unless
> they drop the cost of the latter to literally nothing. And if they do
> that, what's the point?
The quality of the software is what dominates performance, and the price
of the software limits practical scalability. At $10k/node for software,
the software will far outstrip the hardware in terms of price scaling.
That said, the cost of the windows solution will increase the cost of
the cluster in a critical price sensitive area. Given the hardware
margins are very low in the area that WCC targets, there is very little
room to accommodate this extra cost. Assume hardware costs of
$2500/node roughly. 16 nodes (32 CPUs) would cost 40k$. Now add the 8k$
that Microsoft wants from this. That is a 20% increase in cost. What
does the customer get for that extra 20%? Will the software cost 20%
less per node? Not likely. Will the hardware vendors operating in the
3-8% margin region take 20% off? Heck no.
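The arithmetic above, using the post's rough numbers, is worth writing down explicitly:

```python
nodes = 16
hw_per_node = 2500                  # rough hardware cost per node from the post
hw_total = nodes * hw_per_node      # $40,000 for 16 nodes (32 CPUs)
ms_license = 8000                   # what Microsoft wants for this cluster
increase = ms_license / hw_total    # 0.20 -> the 20% premium in the text
```

At 3-8% hardware margins, that 20% cannot be absorbed by the vendor; it lands on the buyer.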
Here is where a CBA (cost-benefit analysis) makes sense to do. What is
the value of the extra 20% as compared to the alternative solution? What
do you get for it? Exactly what pain is the extra 20% solving? Note that
20% is not really the full extra cost, as you need a $50 copy of Norton
per machine, so that's another $800. And how long will these machines be
down on/after Patch Tuesday?
So the question is: is all the extra cost worth having an MPI that just
works? Is MPI that painful? (OK, it can be really annoying sometimes
when you are debugging a problem, but usually, once you fix it, it stays
fixed.) Is all that cost worth having the same exact administration
model for the laptop as for the research/engineering supercomputer? I
am not convinced.
>> closer to "hrm... $8,000 and less headache with MS than with going to
>> a linux system... It's worth it."
> Why less headache? Let's see.
I disagree that it is less headache to go with the monoculture. Show me
a Linux admin who is pulling out their hair from one Patch Tuesday
through the subsequent Patch Tuesday due to issues that the last patch
bolus caused. Most of the Linux admins I know at Fortune 500s get pulled
over to help out with the Windows side before, during, and after Patch
Tuesday, as the Windows admins simply cannot handle the number of
problems that arise. I called this out as an example of how I believe
Microsoft misunderstood
Moreover, on every node you need antivirus/firewall software. Corporate
mandates that for every Windows PC regardless of function. I have seen
lots of that as well.
My point is that unless they did something spectacular so that WCC is
virus/worm repellent, I think the problem is only going to be
exacerbated. Worse, there are companies for which their computing
cluster machines being down for time scales on the order of hours could
mean significant money lost. Many of them now have farms of Linux
clusters, and I would be surprised to see them adopt a new platform.
Downtime costs real money. Patching and protecting cost real money.
Both add to admin overhead and reduce duty cycle.
> Software maintenance? Competing with yum and the repo mirror tree (as
> just one example)?
OT: I am happy to report that SuSE 10.1 has a working yum (not the one
I hacked together for 10.0 and 9.3), and that I have created a repo for
it, and will be doing some warewulf vnfs test ports/builds (woot!).
Our major port of the ww-2.6.2 is on our download web site in src.rpm
and x86_64.rpm form.
Note: with this, I could (easily) build VMware players, run Linux
diskless, and have VMware run off a disk image hosted locally with
backup copies on a remote server somewhere. This would be a "workable"
Windows cluster model. You wouldn't have to run an antivirus or a
firewall. Yes, you pay a performance cost for virtualization, but the
ease of admin can't be beat. A Windows node gets hosed, and you kill the
VMware player, copy an up-to-date version of the disk image over, and
reboot VMware. You could even do it while the errant VMware is still
running, as long as you use a different file name for the disk image.
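The recovery model just described (stage a fresh disk image under a different file name while the errant VM may still hold the old one open, then swap) can be sketched like this; the file names and layout are hypothetical, not any real VMware convention:

```python
import os
import shutil

def refresh_node_image(golden_image, node_dir):
    """Stage a pristine copy of the disk image beside the old one, then
    swap it into place. Because the copy is made under a different file
    name, the old image can still be open in a running (errant) VM; it
    is freed once that player is killed. Paths are illustrative only.
    """
    staged = os.path.join(node_dir, "disk.img.new")
    shutil.copy(golden_image, staged)   # fresh copy under a new name
    current = os.path.join(node_dir, "disk.img")
    os.replace(staged, current)         # atomic rename over the old image
    return current
```

On POSIX filesystems the rename is atomic and the open-but-unlinked old image lingers only until the VM process exits, which is exactly the "do it while the errant VMware is still running" trick.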
> And the list goes on. Not to mention the "obvious" point that it is
> EXPENSIVE to port software to a new platform. Nobody will do this
> unless there are clear and unmistakable benefits, not just a much-hyped
> appearance of Microsoft in a market they've wisely avoided for years.
If the porting environment is made very easy to port to (e.g. little
effort, codes run with a simple recompile), then I expect to see more
ports.
> This is why I think that it is all about something else. Suturing a
> bleeding wound in public relations, supplying a limited market for small
> clusters, supplying an expensive and profitable model for turnkey
> bioinformatics clusters. Don't look for them in places where people
Not sure I agree with this (the expensive/profitable model for
informatics clusters).
> have to write their own code, or use a widely shared open source code
> base. And it is not without its risks. If they fail, their bulletproof
> image will be severely shaken. If they succeed, they risk their client
> server profit margins, as a cluster ain't nothing but a fancy client
> server model.
Hmmm ... see http://en.wikipedia.org/wiki/Creative_destruction and
Clayton Christensen's Innovator's Dilemma
http://www.claytonchristensen.com/publications.html . If you don't
cannibalize your own market, then your competitor surely will. Linux is
eating a growing portion of Microsoft's lunch. Microsoft needs to work
out how to respond. This is IMO one of the responses.
> As I said, ROTFL. That works fine for numb-nuts spending $500. It
> doesn't work that well for corporate or government decision makers
> controlling the disposal of $500,000, where the question is whether it
> buys (say) 2000 Linux nodes or 1000 Microsoft HPC nodes. Somebody's
Note to self: Find out who the heck is selling Opteron servers for
$250/node (see RGB's math above). :)
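RGB's math, spelled out with the numbers from his hypothetical above:

```python
budget = 500_000        # the hypothetical procurement in RGB's post
linux_nodes = 2000      # what he says that budget buys in Linux nodes
per_node = budget / linux_nodes   # $250/node -- hence the note-to-self joke
```

No one is selling Opteron servers at that price, which is the point of the smiley.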
>> Any more, the folks coming out of college have virtually no *nix
>> experience. Universities are pushing Windows OS and development
>> like there's no tomorrow. While there are many instances of universities
> Not here. Not anywhere I know of. Java, yes. Web stuff, yes.
> Honestly, Universities aren't even pushing compilers and real
> programming that much any more from what I see.
[Switching hats for a moment] When I taught a class this past year at
my alma mater on HPC, only a single student in the class had *nix
experience. Few had programming experience outside of Matlab or C++.
Fortran?
They don't do no steenkeen 52 year old computer languages ... It is so
Most did Java. All did Windows. The CLI was a massive shock to their
systems. That you could work on your assignments from home and run them
on a machine miles away was either a pleasant or a scary surprise. These
are the scientists and computer scientists of tomorrow. All they know
is Visual Studio, Java, and other similar things.
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics LLC,
email: landman at scalableinformatics.com
web : http://www.scalableinformatics.com
phone: +1 734 786 8423
fax : +1 734 786 8452 or +1 866 888 3112
cell : +1 734 612 4615