[Beowulf] Supercomputers - iPad versus Cray
Mark Hahn
hahn at mcmaster.ca
Thu Mar 8 10:30:57 PST 2012
>> http://www.theregister.co.uk/2012/03/08/supercomputing_vs_home_usage/
>>
>> A rather nice Register article on costs for supercomputers, adjusted to
>> 2010 dollars,
he really should have talked to someone who knows computers first, though.
a lot of embarrassing nonsense in that article (including how you could
possibly spend $10k on a dual-socket box, or why you'd choose an iPad,
which carries a 65% profit margin and whose BOM is mostly the display).
> Without being overly pessimistic, I get a vibe of "contrived" throughout
> the article.
as opposed to other reg articles? ;)
> For instance, as you mention, the supers are CPI adjusted
> for present day worth, which is quite interesting to see the trend of
> increasing cost for supers. However, it just ends there -- our
> curiosity as to the much more important question, why that trend exists,
> goes unsatisfied.
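the adjustment itself is mechanically trivial, for what it's worth. a rough
python sketch, using approximate CPI-U annual averages (check BLS for the
exact table) and a made-up purchase price:

    # back-of-envelope: convert a nominal historical price to 2010 dollars.
    # CPI values are approximate CPI-U annual averages, not exact BLS data.
    CPI = {1976: 56.9, 1997: 160.5, 2010: 218.1}

    def to_2010_dollars(price, year):
        # scale a nominal price by the ratio of CPI indices
        return price * CPI[2010] / CPI[year]

    # e.g. a hypothetical $8.8M machine bought in 1976:
    print(to_2010_dollars(8.8e6, 1976))   # -> ~3.37e7, i.e. ~$33.7M

so the mechanics aren't the story; the trend is.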
supers are in a kind of crazy arms race. what I'd really like to see is an
article that explores exactly what code runs on the largest (say top-100)
machines. I'm not saying no code could scale to 700k cores (the K computer),
just that I don't know of any science that would. I'm certainly the first to
admit I don't know anything about world-class HPC (Canadian HPC is pretty
much a flop, no pun intended), but would science be better off with 100x
7k-core centi-K computers? I suspect so.
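to put rough numbers on that skepticism: under Amdahl's law, even a tiny
serial (or non-overlapped-communication) fraction wrecks efficiency at 700k
cores. a toy sketch, with made-up serial fractions:

    # Amdahl's law: speedup on p cores when a fraction s of the work is serial.
    def speedup(p, s):
        return 1.0 / (s + (1.0 - s) / p)

    for cores in (7_000, 700_000):
        for s in (1e-4, 1e-5):
            sp = speedup(cores, s)
            print(f"{cores:>7} cores, serial {s:.0e}: "
                  f"speedup ~{sp:,.0f} ({100 * sp / cores:.1f}% efficiency)")

at 7k cores a 1e-5 serial fraction still gets you ~93% efficiency; at 700k
cores it's ~12%. weak scaling dodges some of this, of course, but it's
suggestive for any strong-scaled science.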
there are a couple premises that should be questioned:
- are scale-limited problems where the interesting science is? I talk to
cosmologists a lot - they seem to be many orders of magnitude from being
able to resolve their physics. I'm not sure the same applies to MD,
q-chem, sequence analysis, etc. I'm also not sure that just because a
field is scale-limited, that's where the effort/money should go.
- is there some assumption that larger computers provide economies of scale?
surely this is untrue: past a certain point, larger computers require more
overhead. interconnect cost often grows nonlinearly, practical issues demand
greater attention to cooling and density, and reliability dictates that you
simply can't use the same parts at 700k cores as you can at 7k cores (see
the sketch after this list).
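to illustrate the interconnect point, here's my own back-of-envelope model
(assuming a full-bisection fat-tree built from fixed-radix switches; real
machines taper the fabric precisely because of this):

    import math

    def fat_tree_switches(hosts, radix=36):
        # an n-tier fat-tree of radix-k switches supports 2*(k/2)**n hosts
        # using (2n - 1)*(k/2)**(n - 1) switches; scale pro rata when the
        # tree is only partially populated.
        half = radix // 2
        tiers = max(1, math.ceil(math.log(hosts / 2, half)))
        full_switches = (2 * tiers - 1) * half ** (tiers - 1)
        return full_switches * hosts / (2 * half ** tiers)

    for hosts in (7_000, 700_000):
        s = fat_tree_switches(hosts)
        print(f"{hosts:>7} hosts: ~{s:,.0f} switches ({s / hosts:.3f} per host)")

switch hardware per host nearly doubles going from 7k to 700k hosts (~0.14 vs
~0.25 per host in this toy model), before counting cables, optics, and the
failure math.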
>> And a rather interesting cost per megaflop table on the second page.
>
> I actually hate this table. If we want to compare flops/cost between
> machines, we should at least level the playing field: find Linpack
> numbers on individual machines in the listed clusters and compare them
> to your desktop or iPad.
using an iPad is just stupid, since it's an embarrassingly expensive piece of
eye candy, not a computer. (yeah, yeah, you might love yours, but you still
wrote a cheque for 65% of what you paid that landed directly on Apple's big
pile of cash.)
there are dozens of low-overhead chips out there that make a better comparison.
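the apples-to-apples version of that table is a one-liner once you have a
measured Linpack Rmax per box. the prices and Rmax figures below are
hypothetical placeholders, not measurements:

    # dollars per sustained megaflop, from purchase price and Linpack Rmax.
    def dollars_per_mflop(price_usd, rmax_gflops):
        return price_usd / (rmax_gflops * 1e3)   # GFLOP/s -> MFLOP/s

    # made-up placeholders: substitute measured Rmax for real hardware.
    boxes = {
        "hypothetical dual-socket node": (4_000.0, 300.0),   # ($, GFLOP/s)
        "hypothetical tablet SoC":       (  500.0,   1.5),
    }
    for name, (price, rmax) in boxes.items():
        print(f"{name}: ${dollars_per_mflop(price, rmax):.4f}/MFLOP")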
> After all, in the latter cases, we have no
> interconnect issues to worry about (just imagine Linpack numbers over 3G
> or even Wifi to other iPads...probably worse for 2 than for 1). I think
it's not worth thinking about, since wifi is half-duplex.