[Beowulf] Interactive vs batch, and schedulers

Scott Atchley e.scott.atchley at gmail.com
Fri Jan 17 06:42:31 PST 2020

Hi Jim,

While we allow both batch and interactive jobs, the scheduler handles them
the same. The scheduler uses queue time, node count, requested wall time,
project id, and other factors to determine when jobs run. We have backfill
turned on, so while the scheduler is draining nodes for a large job, it
schedules smaller jobs in that job's footprint as long as their requested
wall time ends before the last node becomes available. We also have a
preemptable queue that can run in the backfill window.
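The backfill test described above can be sketched roughly as follows. This is a minimal illustration, not the scheduler's actual code; the names `Job`, `can_backfill`, and the fields are all invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Job:
    nodes: int        # node count requested
    walltime_s: int   # requested wall time, in seconds

def can_backfill(job: Job, free_nodes: int, drain_window_s: int) -> bool:
    """A small job may run in the backfill window only if it fits on the
    nodes that are already idle AND its requested wall time ends before
    the last node drains (i.e. before the reserved large job must start)."""
    return job.nodes <= free_nodes and job.walltime_s <= drain_window_s
```

So with 8 idle nodes and an hour until the large job's reservation, a 4-node, 10-minute job backfills, while a 4-node, 2-hour job waits.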

While not addressing your concern directly, we found that scheduling large
and small jobs slightly differently makes a difference. The scheduler
typically has a list that enumerates the nodes. We changed the scheduler to
use the list as usual for large jobs, but to walk it in reverse for small
jobs, so that small jobs are placed at the "end" of the list. The paper is
A multi-faceted approach to job placement for improved performance on
extreme-scale systems <https://dl.acm.org/doi/abs/10.5555/3014904.3015021>.
When we started seeing GPU failures and replaced half the GPUs, we modified
the scheduler's list to schedule large GPU jobs on the new GPUs, and small
jobs and CPU-only jobs on the nodes with old GPUs. That paper is GPU
age-aware scheduling to improve the reliability of leadership jobs on Titan
<https://dl.acm.org/doi/abs/10.1109/SC.2018.00010>. You might be able to
adapt these techniques to your situation.
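The list-direction idea can be sketched like this. Everything here is illustrative: the `Node` class, the size threshold, and the greedy free-node scan are assumptions for the example, not the placement algorithm from the papers:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    is_free: bool = True

def place_job(node_list, nodes_needed, large_threshold=1024):
    """Large jobs scan the node list front-to-back; small jobs scan it
    back-to-front. Small jobs therefore cluster at the "end" of the list,
    leaving contiguous room near the front for large jobs."""
    order = node_list if nodes_needed >= large_threshold else node_list[::-1]
    free = [n for n in order if n.is_free]
    if len(free) < nodes_needed:
        return None  # not enough free nodes; the job stays queued
    chosen = free[:nodes_needed]
    for n in chosen:
        n.is_free = False
    return chosen
```

The GPU age-aware variant is the same mechanism with a different list order: sort nodes so that new-GPU nodes come first, then large GPU jobs (scanning from the front) land on new GPUs while small and CPU-only jobs (scanning from the back) land on old ones.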


On Thu, Jan 16, 2020 at 6:25 PM Lux, Jim (US 337K) via Beowulf <
beowulf at beowulf.org> wrote:

> Are there any references out there that discuss the tradeoffs between
> interactive and batch scheduling (perhaps some from the 60s and 70s?) –
> Most big HPC systems have a mix of giant jobs and smaller ones managed by
> some process like PBS or SLURM, with queues of various sized jobs.
> What I’m interested in is the idea of jobs that, if spread across many
> nodes (dozens) can complete in seconds (<1 minute) providing essentially
> “interactive” access, in the context of large jobs taking days to
> complete.   It’s not clear to me that the current schedulers can actually
> do this – rather, they allocate M of N nodes to a particular job pulled out
> of a series of queues, and that job “owns” the nodes until it completes.
> Smaller jobs get run on (M-1) of the N nodes, and presumably complete
> faster, so it works down through the queue quicker, but ultimately, if you
> have a job that would take, say, 10 seconds on 1000 nodes, it’s going to
> take 20 minutes on 10 nodes.
> Jim
> --
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit
> https://beowulf.org/cgi-bin/mailman/listinfo/beowulf
