[Beowulf] Bright Cluster Manager

Chris Samuel chris at csamuel.org
Fri May 4 07:36:03 PDT 2018


On Thursday, 3 May 2018 11:53:14 PM AEST John Hearns via Beowulf wrote:

> The best successes I have seen on clusters is where the heavy parallel
> applications get exclusive compute nodes. Cleaner, you get all the memory
> and storage bandwidth and easy to clean up. Hell, reboot the things after
> each job. You got an exclusive node.

You are describing the BlueGene/Q philosophy there, John. :-)

This idea tends to break down when you throw GPUs into the mix, as there you 
(hopefully) only need a couple of cores on the node to shovel data around 
while the GPU does the grunt work.  That means you'll generally have cores 
left over that could be doing something useful.

On the cluster I'm currently involved with we've got 36 cores per node and a 
pair of P100 GPUs.  We have two Slurm partitions covering those nodes: one for 
non-GPU jobs that can only use up to 32 cores per node, and another for GPU 
jobs with no such restriction.  This means we always keep at least 4 cores per 
node free for GPU jobs.
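
For anyone curious, a minimal sketch of how that sort of split can be 
expressed in slurm.conf would look something like the below.  The node and 
partition names here are made up for illustration, not what we actually run, 
and the GPU devices would also need matching entries in gres.conf on the 
nodes:

  GresTypes=gpu
  NodeName=node[001-010] CPUs=36 Gres=gpu:p100:2 State=UNKNOWN

  # Non-GPU partition: MaxCPUsPerNode caps jobs here at 32 of the 36 cores
  # on any node, so at least 4 cores per node stay free for GPU work.
  PartitionName=cpu Nodes=node[001-010] MaxCPUsPerNode=32 Default=YES State=UP

  # GPU partition: no per-node CPU cap, so GPU jobs can use the spare cores.
  PartitionName=gpu Nodes=node[001-010] State=UP

A GPU job then just asks for the GPU partition along with its GPUs and a few 
cores, e.g. "sbatch -p gpu --gres=gpu:1 -c 4 job.sh".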

All the best,
Chris
-- 
 Chris Samuel  :  http://www.csamuel.org/  :  Melbourne, VIC


