[Beowulf] Cloud / HPC

Mark Hahn hahn at mcmaster.ca
Tue Apr 16 21:28:02 PDT 2013

> http://www.admin-magazine.com/HPC/articles/the_cloud_s_role_in_hpc

I had a very hard time with this article.

in the "Massively Concurrent Runs" section, I really couldn't parse it 
as anything other than a devil's advocacy of people getting access to 
big clusters.  it would be great if every researcher wanting to perform
50k 2-minute runs had access to 50k cores all at once.  would such a 
researcher really disregard a 120->100 second runtime improvement?
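a quick back-of-the-envelope with the numbers above (50k runs, 2 minutes each, a 120 s -> 100 s per-run improvement; all figures illustrative, taken from this post, not from the article):

```python
# back-of-the-envelope for the 50k-run example above;
# all numbers are illustrative, from the post itself.
runs = 50_000
slow_s, fast_s = 120, 100            # per-run runtime before/after the speedup

# total serial core-time, in core-hours
slow_hours = runs * slow_s / 3600    # ~1667 core-hours
fast_hours = runs * fast_s / 3600    # ~1389 core-hours

print(f"core-hours saved: {slow_hours - fast_hours:.0f}")  # ~278
# with 50k cores available at once, wall time is just one run:
print(f"wall time on 50k cores: {slow_s}s vs {fast_s}s")
```

either way you slice it, the 17% improvement buys real core-hours (money) even when the wall-clock difference is only 20 seconds.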

but wait, is the message actually that paying a superlinear amount 
for faster processors is bad?  well sure, but who is proposing that?
if someone's research requires booting a unique kernel for each of many test
cases, what's the big deal?  you pay for your time on the hardware. 
again, I couldn't really pin down an actual point here, except that 
it would be great if everyone had plentiful-sized clusters. 
a chicken in every pot!

and yes, massive bursts of jobs should take place on shared resources, 
so spikes can be interleaved.  but come on, that's what shared HPC 
clusters (aka PaaS research clouds) have been doing for many years.

the Web Services section seems to be just an appeal for rapid and
low-friction provisioning.  who would ever argue against such a thing?

is "Research Computing" a better name?  sure, HPC hasn't been about 
extremely fast clocks since the Cray days.  I always tell people: HPC is 
for when you want more than you can comfortably put on your desktop.
more what?  more anything: cores, memory, nodes, network, storage.

virtualizing in order to overcommit doesn't make much sense unless
your workload is not bottlenecked by any resource.  huh?

Cycle Computing is great, but their example uses spot prices,
which afaict are not in equilibrium.  try those runs with on-demand
or reserved instances, and the numbers will tell a different story.
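to make the pricing point concrete, here is a sketch of the same batch costed at spot vs. on-demand rates.  the per-core-hour prices below are made-up placeholders for illustration, not Cycle's (or Amazon's) actual figures:

```python
# hypothetical cost comparison for a fixed batch of core-hours;
# prices are placeholders, NOT actual EC2 or Cycle Computing figures.
core_hours = 10_000
spot_per_hr = 0.10        # assumed spot price, $/core-hour
on_demand_per_hr = 0.50   # assumed on-demand price, $/core-hour

spot_cost = core_hours * spot_per_hr             # $1,000
on_demand_cost = core_hours * on_demand_per_hr   # $5,000
print(f"spot: ${spot_cost:,.0f}  on-demand: ${on_demand_cost:,.0f}  "
      f"ratio: {on_demand_cost / spot_cost:.0f}x")
```

with any such gap between spot and on-demand rates, the headline cost number is mostly a function of which price you quote, not of the cloud's efficiency.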

the "For comparison purposes" example is just too random to criticize.

in short: yes, shared resources are great for bursty workloads.
I'm quite comfortable calling what we do "research cloud".

regards, mark hahn.
