[Beowulf] cloudy HPC?
Jörg Saßmannshausen
j.sassmannshausen at ucl.ac.uk
Thu Jan 30 13:24:41 PST 2014
Hi Mark,
Interesting thread, especially as I wanted to ask something similar.
We are currently looking into the possibility of setting up an off-site data
centre for ??? (we don't know yet), and we also have to build a new data centre,
as the old one is in the way of a train line that is yet to be built.
Anyhow, similar to Mark's question: if you want to build a data centre from
scratch in London, i.e. in a densely populated area where floor space is an issue,
what would be important to you?
Virtualisation (for HPC number crunching)?
Energy saving?
50 kW racks?
Two data centres for resilience/fail-over?
Trying out new and emerging technologies like the Iceotope cooling?
Data security?
Or forget all of that and use the cloud for number crunching HPC?
I know all of that is important in one way or another.
As Mark is asking similar questions, I thought we might merge that into one
thread instead of two interwoven ones.
All the best from a nearly flooded UK :D
Jörg
On Thursday 30 January 2014, Mark Hahn wrote:
> Hi all,
> I would be interested to hear any comments you have about
> delivering HPC services on a "cloudy" infrastructure.
>
> What I mean is: suppose there is a vast datacenter filled
> with beautiful new hosts, plonkabytes of storage and all
> sitting on the most wonderful interconnect. One could run
> the canonical HPC stack on the bare metal (which is certainly
> what we do today), but would there be any major problems/overhead
> if it were only used to run VMs?
>
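For what it's worth, the knobs that usually matter for keeping VM overhead low
on compute workloads are CPU pinning, hugepage-backed memory and exposing the
host CPU directly. A minimal libvirt domain-XML sketch (guest name, memory size
and core numbers below are made up for illustration):

  <domain type='kvm'>
    <name>hpc-guest01</name>               <!-- hypothetical guest name -->
    <memory unit='GiB'>64</memory>
    <memoryBacking>
      <hugepages/>                         <!-- back guest RAM with host hugepages -->
    </memoryBacking>
    <vcpu placement='static'>16</vcpu>
    <cputune>
      <vcpupin vcpu='0' cpuset='0'/>       <!-- pin each vCPU to a fixed host core -->
      <vcpupin vcpu='1' cpuset='1'/>
      <!-- ... one vcpupin entry per vCPU ... -->
    </cputune>
    <cpu mode='host-passthrough'/>         <!-- expose the host CPU model unmasked -->
    <!-- disks, devices etc. omitted -->
  </domain>

With that sort of setup, serial and threaded codes reportedly run close to
bare-metal speed; the interconnect is the harder part, as you note below.
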
> by "HPC services", I mean a very heterogenous mixture of
> serial, bigdata, fatnode/threaded, tight-coupled-MPI, perhaps
> even GP-GPU stuff from hundreds of different groups, etc.
>
> For instance, I've heard complaints that MPI over a virtualized
> interconnect is slow. But VM infrastructure like KVM can give device
> ownership to the guest, so IB access *could* be bare-metal. (If security
> is a concern, perhaps it could be implemented at the SM level. OTOH, the
> usual sort of shared PaaS HPC doesn't care much about interconnect
> security...)
>
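Indeed: with PCI passthrough (VFIO) the guest owns the HCA and the MPI path
does not touch a virtualized NIC at all. A sketch of the relevant libvirt
fragment (the PCI address is hypothetical; SR-IOV virtual functions are
handled the same way if one HCA has to be shared between guests):

  <hostdev mode='subsystem' type='pci' managed='yes'>
    <source>
      <!-- hypothetical PCI address of the IB HCA (or one of its SR-IOV VFs) -->
      <address domain='0x0000' bus='0x81' slot='0x00' function='0x0'/>
    </source>
  </hostdev>

The SM-level security you mention would presumably come down to partition keys
(P_Keys), which the subnet manager can assign per tenant to keep traffic apart.
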
> I'm not really interested in particulars of, for instance,
> bursting workloads using the EC2 spot market. I know the numbers:
> anyone with a clue can run academic/HPC-tuned facilities at a
> fraction of commercial prices. I also know that clusters and
> datacenters are largely linear in cost once you get to a pretty
> modest size (say 20 racks).
>
> If you're interested in why I'm asking, it's because Canada is
> currently trying to figure out its path forward in "cyberinfrastructure".
> I won't describe the current sad state of Canadian HPC, except that
> it's hard to imagine *anything* that wouldn't be an improvement ;)
> It might be useful, politically, practically, optically, to split
> off hardware issues from the OS-up stack. Doing this would at the
> very least make a perfectly clear delineation of costs, since the
> HW-host level has a capital cost, some space/power/cooling/service
> costs, no software costs, and almost no people costs. The OS-up part
> is almost entirely people costs, since only a few kinds of research
> require commercial software.
>
> thanks, Mark Hahn.
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit
> http://www.beowulf.org/mailman/listinfo/beowulf
--
*************************************************************
Jörg Saßmannshausen
University College London
Department of Chemistry
Gordon Street
London
WC1H 0AJ
email: j.sassmannshausen at ucl.ac.uk
web: http://sassy.formativ.net
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html