[Beowulf] Utility Supercomputing...
James Cuff
james.cuff at cyclecomputing.com
Fri Mar 1 09:03:23 PST 2013
Hey team Beowulf,
So most of you know that I headed out from being the research computing guy
at Harvard to join Cycle Computing last month. It's been a fun first few
weeks, what with 10,000+ server instances spinning up in hours flat and
doing some stunning science to boot!
Anyway, I noticed the "Utility Supercomputing" concept had been written up
recently over at HPCwire:
http://www.hpcwire.com/hpcwire/2013-02-28/utility_supercomputing_heats_up.html
Like most of you, I always give a big hairy technical eyeball to any
statement that pairs MPI with "cloud". I know I'm biased, but I do think
Jason does a great job of explaining "the bench", i.e. never assume raw
horsepower until you benchmark it! It always reminds me of those 1,000 bhp
motors that are only great in straight lines ;-) Another thing to think
about is total cost per unit of science. Given we can now exploit much
larger systems than some of us have internally, are we starting to see
overhead effectively vanish at massive scale, at least on cost? I know for
a fact that for what we call "Pleasantly Parallel" workloads this holds
true: running at massive scale gives a lower cost per unit of science for
those grand challenge "PP" problems.
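As a purely back-of-envelope illustration of the arithmetic I have in mind
(all prices and rates below are made-up placeholders, not real internal or
cloud numbers), here's a tiny Python sketch for a pleasantly parallel job:

    # Back-of-envelope cost-per-unit-of-science model for a pleasantly
    # parallel workload. Every number here is a hypothetical placeholder.

    def cost_per_unit(tasks, tasks_per_core_hour, cores, price_per_core_hour):
        """Cost and wall-clock time to finish `tasks` independent tasks."""
        core_hours = tasks / tasks_per_core_hour       # total compute needed
        wall_hours = core_hours / cores                # elapsed time
        total_cost = core_hours * price_per_core_hour  # dollars for the run
        return total_cost / tasks, wall_hours

    # 1M independent tasks, each ~6 minutes on one core.
    tasks, rate = 1_000_000, 10.0   # 10 tasks per core-hour

    # Internal cluster: 1,024 cores at a fully loaded internal rate.
    in_cost, in_hours = cost_per_unit(tasks, rate, 1024, 0.08)

    # Cloud: 50,000 cores rented only for the duration of the run.
    cl_cost, cl_hours = cost_per_unit(tasks, rate, 50_000, 0.10)

    print(f"internal: ${in_cost:.4f}/task, {in_hours:,.0f} h wall clock")
    print(f"cloud:    ${cl_cost:.4f}/task, {cl_hours:,.1f} h wall clock")

The dollar-per-task barely moves with scale for PP work (cost tracks
core-hours, not core count), while time-to-result collapses from roughly
four days to two hours, and that's where the per-unit-of-science win comes
from once you count the time scientists spend waiting.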
I personally think the game is starting to change yet again here...
So at the risk of being moderately contentious: straw poll - what do we
think as a team about these issues?
j.
Dr. James Cuff
Chief Technology Officer
Cycle Computing
*Leader in Utility Supercomputing and Cloud HPC Software*
cell: 617.429.5138
main: 888.292.5320
skype: cycle.james.cuff
web: http://www.cyclecomputing.com