[Beowulf] Setting memory limits on a compute node

Mikhail Kuzminsky kus at free.net
Thu Jun 10 08:00:15 PDT 2004

According to Brent M. Clements
> We have a user who submits a job to a compute node.
> The application is gaussian. The parent gaussian process can spawn a few
> child processes. It appears that the gaussian application is exhausting
> all of the memory in the system, essentially stopping the machine from
> working. You can still ping the machine but can't ssh. Anyways, I know the
> fundamentals of why this is happening. My question: is there any way to
> limit a user's total addressable space that his processes can use so that
> it doesn't kill the node?
  This situation may depend strongly on the actual calculation method used
within Gaussian (and perhaps on the objects of the calculation, i.e. the molecules).
We work with G98 jobs (I believe G03 will behave the same way) and haven't
had such problems.
  You may try to restrict (if it's really necessary) the memory used by a
particular Gaussian job by setting the %mem value in the Gaussian input
data; there is also a default %mem setting in the Gaussian configuration
file. G98 cannot exceed the %mem value.
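As an illustration, %mem goes in the Link 0 section at the top of the input file; the value, route section, and geometry below are hypothetical (48 MW is 48 million words, roughly 384 MB on a machine with 8-byte words -- consult the Gaussian manual for the unit syntax your version accepts):

```
%mem=48MW
# RHF/6-31G* opt

Hypothetical water optimization

0 1
O  0.000   0.000   0.000
H  0.000   0.757   0.587
H  0.000  -0.757   0.587
```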

  We inform our G98 users of the upper limit on the %mem value that does not
lead to heavy paging. You may also try to set ulimit/limit values for stack and
data in the shell script used to submit the G98 job.
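A minimal sketch of such a submit wrapper, assuming a Bourne-style shell; the limit values and the g98 invocation are illustrative and should be tuned to the node's RAM:

```shell
#!/bin/sh
# Limits set here apply to this shell and are inherited by every child
# process Gaussian forks, so a runaway job hits the limit instead of
# exhausting the node's memory. Values are in kB and are examples only.

ulimit -d 1048576    # data segment: 1 GB
ulimit -s 65536      # stack: 64 MB
ulimit -v 1572864    # total virtual address space: 1.5 GB

# g98 < job.com > job.log    # site-specific submission line
```

Note that a non-root user can lower these soft limits but cannot raise them again in the same shell, which is exactly what you want for a per-job cap.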

Mikhail Kuzminsky
Zelinsky Institute of Organic Chemistry
