[Beowulf] HPC cloud bursting providers?

Jeff Friedman jeff.friedman at siliconmechanics.com
Thu Feb 23 12:56:43 PST 2017


Thank you all for the info, it is very useful. It seems most of the cloud orchestrator software includes a bit more functionality than we need. We want to keep using the standard HPC provisioning, scheduling, and monitoring software, and just automate the setup and presentation of the cloud nodes. We are looking into establishing a VPN to AWS, and then continuing to evaluate which software would do the best job of automated setup/teardown of cloud resources. One option we are considering is just using AWS CloudFormation. There are also Bright Computing Cluster Manager, Cycle Computing, RightScale, and a couple of others, but again, I think these are a bit too robust for what we need. I’ll keep y’all posted if interested.
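For anyone curious what the bare-bones CloudFormation route might look like: a minimal sketch of a template that stands up a single burst node, where the AMI, instance type, and subnet are all placeholders/assumptions (the subnet would be the one reachable over the VPN):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: One burst compute node; teardown is just a stack delete. Illustrative sketch only.
Parameters:
  SubnetId:
    Type: AWS::EC2::Subnet::Id   # the VPN-attached subnet (assumption)
Resources:
  ComputeNode:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: c4.8xlarge       # placeholder size
      ImageId: ami-xxxxxxxx          # site compute-node image (placeholder)
      SubnetId: !Ref SubnetId
```

Setup and teardown then reduce to `aws cloudformation create-stack --stack-name burst --template-body file://burst.yaml --parameters ParameterKey=SubnetId,ParameterValue=subnet-xxxx` and `aws cloudformation delete-stack --stack-name burst`, which is about as little orchestration machinery as you can get away with.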

Thanks again!

Jeff Friedman
Sales Engineer
o: 425.420.1291
c: 206.819.2824
www.siliconmechanics.com


On Feb 23, 2017, at 10:49 AM, Lev Lafayette <lev.lafayette at unimelb.edu.au> wrote:

On Wed, 2017-02-22 at 10:02 +1100, Christopher Samuel wrote:
> On 21/02/17 12:40, Lachlan Musicman wrote:
> 
>> I know that it's been done successfully here by the University of
>> Melbourne's Research Platforms team - but they are bursting into the non
>> commercial Aust govt Open Stack installation Nectar.

In context that was after (a) small test cases of cloud bursting worked
and (b) cloud bursting was used to replace our existing cloud partition.

> So now they just provision extra VM's when they need more and add them
> to Slurm and given demand doesn't seem to go down there hasn't been a
> need to take any away yet. :-)

Watch this space ;)

> So this doesn't really reflect what Jeff was asking about as it's all
> the same infrastructure, it's not hitting remote clouds where you have
> to figure out how you are going to see your filesystem there, or how to
> stage data.
> 

Very much so. The ability to set up an additional partition with external 
providers (e.g., Amazon, Azure, any OpenStack provider) is much less of a 
problem than the interconnect issues, which are quite significant. 
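For what it's worth, Slurm's elastic computing (power save) support is one way to wire such a partition up: nodes marked State=CLOUD only exist while jobs need them. A minimal slurm.conf sketch, where the node names, sizes, timeouts, and resume/suspend scripts are all assumptions:

```
# slurm.conf fragment (illustrative sketch; values are assumptions)
ResumeProgram=/usr/local/sbin/start-cloud-node.sh    # provisions the VM on demand
SuspendProgram=/usr/local/sbin/stop-cloud-node.sh    # tears the VM down again
SuspendTime=600        # seconds idle before a cloud node is released
ResumeTimeout=900      # allow time for cloud provisioning to complete

NodeName=cloud[001-016] State=CLOUD CPUs=16 RealMemory=60000
PartitionName=cloud Nodes=cloud[001-016] Default=NO MaxTime=24:00:00
```

The resume/suspend scripts are where the provider-specific setup and teardown live; the filesystem and data-staging questions Chris raises are, of course, not solved by any of this.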


All the best,


-- 
Lev Lafayette, BA (Hons), GradCertTerAdEd (Murdoch), GradCertPM, MBA
(Tech Mngmnt) (Chifley)
HPC Support and Training Officer +61383444193 +61432255208
Department of Infrastructure Services, University of Melbourne

_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
