[Beowulf] Clearing out scratch space

Nick Evans nick.c.evans at gmail.com
Tue Jun 12 05:54:14 PDT 2018


At {$job -1} we used local scratch and tmpwatch, with a wrapper script
that excluded the files and folders of any user currently running a job
on the node.

That way nothing got removed until the user's job had finished, even if
the files hadn't been accessed for a while, and you don't have to predict
how long a job could run.

On Tue, 12 Jun 2018 at 22:21, Skylar Thompson <skylar.thompson at gmail.com>
wrote:

> On Tue, Jun 12, 2018 at 10:06:06AM +0200, John Hearns via Beowulf wrote:
> > What do most sites do for scratch space?
>
> We give users access to local disk space on nodes (spinning disk for older
> nodes, SSD for newer nodes), which (for the most part) GE will address with
> the $TMPDIR job environment variable. We have a "ssd" boolean complex that
> users can place in their job to request SSD nodes if they know they will
> benefit from them.
>
> We also have labs that use non-backed up portions of their network storage
> (Isilon for the older storage, DDN/GPFS for the newer) for scratch space
> for processing of pipeline data, where different stages of the pipeline run
> on different nodes.
>
> --
> Skylar
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit
> http://www.beowulf.org/mailman/listinfo/beowulf
>
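From the user side, the $TMPDIR setup Skylar describes could be used in a GE job script along these lines (a sketch: the "ssd" complex name comes from the message above; the file names and the analyze program are illustrative):

```shell
#!/bin/sh
#$ -l ssd=1       # request an SSD node via the "ssd" boolean complex
#$ -cwd           # start in the submission directory

# GE points $TMPDIR at per-job local scratch on the execution node.
# Stage input in, work locally, then copy results back to the
# submission directory ($SGE_O_WORKDIR).
cp input.dat "$TMPDIR/"
cd "$TMPDIR"
"$SGE_O_WORKDIR/analyze" input.dat > results.out
cp results.out "$SGE_O_WORKDIR/"
```

Working inside $TMPDIR keeps I/O off the shared filesystem, and GE removes the directory when the job ends, so there is nothing for a cleanup sweep to miss.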

