[Beowulf] Interactive vs batch, and schedulers
Alex Chekholko
alex at calicolabs.com
Thu Jan 16 15:50:03 PST 2020
Hey Jim,
There is an inverse relationship between latency and throughput. Most
supercomputing centers aim to keep their overall utilization high, so the
queue always needs to be full of jobs.
If you can have 1000 nodes always idle and available, then your 1000 node
jobs will usually take 10 seconds. But your overall utilization will be in
the low single digit percent or worse.
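A back-of-envelope sketch of that tradeoff, assuming ideal strong scaling (total work is a fixed number of node-seconds) and an illustrative interactive arrival rate — all numbers here are made-up assumptions, not measurements:

```python
def runtime_seconds(total_node_seconds, nodes):
    """Ideal strong scaling: a fixed amount of work divided evenly across nodes."""
    return total_node_seconds / nodes

# Jim's example job: 10 seconds on 1000 nodes = 10,000 node-seconds of work.
work = 10 * 1000

print(runtime_seconds(work, 1000))       # 10.0 seconds on the full pool
print(runtime_seconds(work, 10) / 60)    # ~16.7 minutes on 10 nodes (roughly "20 minutes")

# Utilization cost of holding a 1000-node pool idle for interactive use:
# suppose (hypothetically) 30 such jobs arrive per hour.
busy_node_seconds = 30 * work            # work actually done per hour
available_node_seconds = 1000 * 3600     # capacity of the reserved pool per hour
print(f"{busy_node_seconds / available_node_seconds:.1%}")  # ~8.3% utilization
```

At any realistic interactive arrival rate the reserved pool sits in the single-digit-percent range, which is exactly why centers that optimize for utilization keep the queue full instead.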
Regards,
Alex
On Thu, Jan 16, 2020 at 3:25 PM Lux, Jim (US 337K) via Beowulf <
beowulf at beowulf.org> wrote:
> Are there any references out there that discuss the tradeoffs between
> interactive and batch scheduling (perhaps some from the 60s and 70s?) –
>
> Most big HPC systems have a mix of giant jobs and smaller ones managed by
> some process like PBS or SLURM, with queues of various sized jobs.
>
>
>
> What I’m interested in is the idea of jobs that, if spread across many
> nodes (dozens) can complete in seconds (<1 minute) providing essentially
> “interactive” access, in the context of large jobs taking days to
> complete. It’s not clear to me that the current schedulers can actually
> do this – rather, they allocate M of N nodes to a particular job pulled out
> of a series of queues, and that job “owns” the nodes until it completes.
> Smaller jobs get run on the remaining (N-M) nodes, and presumably complete
> faster, so the scheduler works down through the queue more quickly, but
> ultimately, if you have a job that would take, say, 10 seconds on 1000
> nodes, it’s going to take 20 minutes on 10 nodes.
>
>
>
> Jim
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit
> https://beowulf.org/cgi-bin/mailman/listinfo/beowulf
>