[Beowulf] Docker in HPC

Peter Clapham pc7 at sanger.ac.uk
Wed Nov 27 07:49:14 PST 2013


Not at all. The restriction / affinity of jobs or processes to a given 
core or core subset is very much in mind. Memory management is also 
potentially rather useful. With most schedulers the memory used by a 
job is obtained by polling it periodically.
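
To make the polling point concrete: the sampling loop is usually no 
more sophisticated than the sketch below (illustrative Python, not 
lifted from any particular scheduler). Anything that allocates and 
frees between samples is simply never seen.

import time

def poll_rss_kb(pid):
    # VmRSS in /proc/<pid>/status is reported in kB
    with open("/proc/%d/status" % pid) as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    return 0

def watch(pid, interval=30):
    # sample at a fixed interval; short-lived spikes between samples
    # go completely unnoticed
    while True:
        print("pid %d: %d kB resident" % (pid, poll_rss_kb(pid)))
        time.sleep(interval)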

To date, memory limits have been enforced either by the scheduler 
wrapping jobs with ulimit at startup, or by a local daemon sending a 
kill when it notices that a job, or a job component, has exceeded the 
limits set at submission.
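
The ulimit wrapper amounts to little more than setting an rlimit in 
the child before exec'ing the job, roughly as below (again only an 
illustrative Python sketch; which rlimits a given scheduler actually 
sets will vary). The limit applies per process, so a job that forks 
can still blow well past its nominal total, which is one source of 
the confusion mentioned below.

import os, resource, sys

def run_with_limit(argv, limit_bytes):
    pid = os.fork()
    if pid == 0:
        # child: apply the address-space limit, then become the job
        resource.setrlimit(resource.RLIMIT_AS, (limit_bytes, limit_bytes))
        try:
            os.execvp(argv[0], argv)
        except OSError:
            os._exit(127)    # exec failed; don't fall back into parent code
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)

if __name__ == "__main__":
    # e.g. ./wrap.py my_job arg1 arg2  (4 GiB cap picked arbitrarily here)
    sys.exit(run_with_limit(sys.argv[1:], 4 * 1024 ** 3))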

Both of the above approaches have limitations which can confuse users. 
The cgroup approach effectively takes on the role of ulimits on 
steroids, allowing accurate memory tracking and enforcement. This 
ensures that the job output reports the actual memory usage when the 
job is killed, and that the job cannot exceed the configured limits.
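
For anyone who hasn't poked at it yet, the mechanics are just file 
writes under the memory controller's mount point. A rough sketch, 
assuming a v1 memory controller mounted at /sys/fs/cgroup/memory and 
a made-up per-job group name: the kernel enforces the limit and 
records the peak usage, so the scheduler has real numbers to report 
when it kills the job.

import os

CGROUP = "/sys/fs/cgroup/memory/job_12345"   # hypothetical per-job group

def create_group(limit_bytes):
    os.makedirs(CGROUP)
    with open(os.path.join(CGROUP, "memory.limit_in_bytes"), "w") as f:
        f.write(str(limit_bytes))

def add_task(pid):
    # every process written to 'tasks' is accounted and constrained together
    with open(os.path.join(CGROUP, "tasks"), "w") as f:
        f.write(str(pid))

def peak_usage_bytes():
    # the peak survives until the group is removed, so it can go into the
    # job's exit report even if the kernel OOM-killed the job
    with open(os.path.join(CGROUP, "memory.max_usage_in_bytes")) as f:
        return int(f.read())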

Pete


On 27/11/13 15:39, John Hearns wrote:
> I use cpusets very successfully.
> I rather idly wonder whether, on a cluster with manycore nodes (such 
> as we have these days), cgroups should be used to keep the OS 
> processes on the first core, and, as Igor says, let the scheduler run 
> the applications in separate cgroups.
> The aim being to reduce 'OS jitter'.
> I suppose it depends on the application being run of course.
> Apologies if I am, yet again, wittering.
>
>
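
On John's cpuset suggestion above: the same interface covers that as 
well. Something along these lines (only a sketch; the paths assume the 
cpuset controller is mounted at /sys/fs/cgroup/cpuset, and on some 
setups the files are named cpus / mems rather than cpuset.cpus / 
cpuset.mems) would park the OS housekeeping on core 0 and leave the 
remaining cores for scheduler-launched work.

import os

ROOT = "/sys/fs/cgroup/cpuset"

def make_cpuset(name, cpus, mems="0"):
    path = os.path.join(ROOT, name)
    os.makedirs(path)
    with open(os.path.join(path, "cpuset.cpus"), "w") as f:
        f.write(cpus)
    with open(os.path.join(path, "cpuset.mems"), "w") as f:
        f.write(mems)
    return path

def move_task(path, pid):
    with open(os.path.join(path, "tasks"), "w") as f:
        f.write(str(pid))

# e.g. make_cpuset("system", "0") for the OS daemons and
#      make_cpuset("jobs", "1-31") for scheduler work on a 32-core node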





