meatheadmerlin at gmail.com
Mon Dec 3 06:11:59 PST 2012
I understand that true exascale would mean a
more tightly knit system functioning as a single unit,
but a distributed workload is in fact distributed
no matter what the topologies or latencies end up being.
And, isn't everyone saying exascale will take different thinking anyway?
Taking that view, one might argue that a network of clusters
like Google's or Amazon's is already doing exascale-level work
on a very distributed workload.
And also, while activities like spamming don't qualify as HPC either,
I should think some botnets have already reached exascale.
I wonder if something couldn't be learned from their
small code and message passing algorithms.
I have also often wondered about the feasibility
of running something like a BOINC-distributed project locally
across all available personal machines in an organization
to accomplish large calculations that are perhaps
embarrassingly parallel and not as time-sensitive
as most HPC modeling and crunching seem to be.
It seems quite possible to set up and run such projects
and still keep all the work "in house."
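As a rough illustration of how small such an in-house setup could be, here is a sketch using Python's standard-library multiprocessing.managers: one machine hosts a task queue and a result queue, and worker processes on idle desktops connect over the LAN and drain it. The host, port, and authkey values are placeholders, not a real deployment, and this assumes a trusted internal network.

```python
import queue
from multiprocessing.managers import BaseManager

# Queues live on the coordinating machine; workers see them as proxies.
_tasks = queue.Queue()
_results = queue.Queue()

class QueueManager(BaseManager):
    pass

QueueManager.register('tasks', callable=lambda: _tasks)
QueueManager.register('results', callable=lambda: _results)

def make_server(port, authkey):
    """On the coordinator: build the server; call .serve_forever() on it."""
    return QueueManager(address=('', port), authkey=authkey).get_server()

def run_worker(host, port, authkey, compute):
    """On each desktop: pull independent tasks until the queue is drained."""
    mgr = QueueManager(address=(host, port), authkey=authkey)
    mgr.connect()
    tasks, results = mgr.tasks(), mgr.results()
    while True:
        try:
            item = tasks.get_nowait()
        except queue.Empty:
            return
        results.put((item, compute(item)))  # compute() is the real work
```

Each task here would be a self-contained unit of the embarrassingly parallel job, and because nothing leaves the organization's network, the work stays "in house."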
Just some thoughts from a person not actually doing HPC.