[Beowulf] HPC cloud bursting providers?

Lev Lafayette lev.lafayette at unimelb.edu.au
Tue Feb 21 12:22:26 PST 2017

On Mon, 2017-02-20 at 21:10 -0800, Jeff Friedman wrote:
> Thank you for the info, it is helpful. Do you mind if I ask what cluster management software you are using? Were there modifications or special functions needed to include cloud nodes in the cluster? I am trying to envision how the traditional provisioning and management apps would communicate with cloud nodes, since some of the normal network protocols would not be in place (for image installation, remote boot, tftp, etc).

Hi Jeff,

Using the Slurm Workload Manager, the general partitions in place are
for a traditional HPC architecture (the "physical" partition), virtual
machines on the NeCTAR research cloud (the "cloud" partition), one of
two private departmental partitions (the "water" and "ashley"
partitions), a specialist proteomics partition ("punim0095"), and a GPU
partition. Each of these has a nodelist, and its nodes are generated
from virtual machine images.
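
On the Slurm side this uses the standard power-saving / cloud-bursting
hooks. As a rough sketch only -- the node names, script paths, and
timeouts below are illustrative assumptions, not Spartan's actual
configuration -- the relevant slurm.conf fragment looks something like:

  # Illustrative slurm.conf fragment for cloud-burst nodes.
  # ResumeProgram boots instances (e.g. via nova, as in the script below);
  # SuspendProgram tears idle ones down. Paths/names are hypothetical.
  ResumeProgram=/opt/vSpartan/resume.sh
  SuspendProgram=/opt/vSpartan/suspend.sh
  SuspendTime=600
  ResumeTimeout=900
  # State=CLOUD marks nodes that exist only while burst into the cloud.
  NodeName=cloud[001-100] State=CLOUD
  PartitionName=cloud Nodes=cloud[001-100] MaxTime=INFINITE State=UP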

The following short script illustrates how such an instance is started:

source /opt/vSpartan/cloud.rc
# nova show exits non-zero if the instance does not already exist.
nova show $INSTANCE >/dev/null 2>&1
INSTANCE_LIVE=$?
if [[ $INSTANCE_LIVE != 0 ]]; then
  # Port assumed to be named after the instance.
  PORT_ID=$(neutron port-show -f value -F id $INSTANCE)
  nova boot --flavor $FLAVOR --image $IMAGE --key-name $KEYNAME \
    --availability-zone $AVAILABILITY_ZONE --security-groups default \
    --nic port-id=$PORT_ID --user-data cloudcfg/${INSTANCE}.cfg \
    --hint different_host=$SPARTAN_MGMT --hint different_host=$SPARTAN_LOGIN \
    $INSTANCE >>/var/log/slurm/cloudburst.log 2>&1
fi

The cloud.rc file will include the image name, flavour, etc.
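
For reference, a cloud.rc along these lines just exports the OpenStack
credentials and instance parameters -- every value below is a
placeholder, not Spartan's real settings:

```shell
# Hypothetical cloud.rc -- all values are illustrative placeholders.
# OpenStack credentials for the nova/neutron CLI clients:
export OS_AUTH_URL=https://keystone.example.org:5000/v2.0
export OS_TENANT_NAME=myproject
export OS_USERNAME=slurm
export OS_PASSWORD=changeme

# Instance parameters consumed by the boot script:
FLAVOR=m2.large
IMAGE=compute-node-image
KEYNAME=mgmt-key
AVAILABILITY_ZONE=melbourne-qh2
```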

So, quite unusually, our "cluster management software" is OpenStack! 

All the best,

Lev Lafayette, BA (Hons), GradCertTerAdEd (Murdoch), GradCertPM, MBA
(Tech Mngmnt) (Chifley)
HPC Support and Training Officer +61383444193 +61432255208
Department of Infrastructure Services, University of Melbourne
