[Beowulf] Hyperthreading and 'OS jitter'
Evan Burness
evan.burness at cyclecomputing.com
Tue Aug 1 20:37:17 PDT 2017
Thanks for the history lessons, Chris! Very interesting indeed.
Would be interesting to take it a step further and measure the impacts
(good, bad, or otherwise) of picking a specific core on a given CPU uArch
layout to run the OS.
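
For instance, something along these lines could carve a node into a
'boot' cpuset for the OS/daemons and a 'jobs' cpuset for compute, in the
spirit of the SGI cpuset work you mention below. Just a rough sketch, not
production code: it assumes a cgroup-v1 cpuset hierarchy mounted at
/sys/fs/cgroup/cpuset, root privileges, and the core numbers (0 for the
OS, 1-15 for jobs) are made up for illustration.

#!/usr/bin/env python3
"""Rough sketch: confine OS/system tasks to a 'boot' cpuset and reserve
a 'jobs' cpuset for compute. Paths, names and core numbers are
illustrative assumptions, not a known-good recipe."""

import os

CPUSET_ROOT = "/sys/fs/cgroup/cpuset"   # assumes a cgroup-v1 cpuset mount


def make_cpuset(name, cpus, mems="0"):
    """Create a cpuset cgroup and assign it CPUs and memory nodes."""
    path = os.path.join(CPUSET_ROOT, name)
    os.makedirs(path, exist_ok=True)
    with open(os.path.join(path, "cpuset.cpus"), "w") as f:
        f.write(cpus)
    # cpuset.mems must be non-empty before any task can join the cpuset
    with open(os.path.join(path, "cpuset.mems"), "w") as f:
        f.write(mems)
    return path


def move_task(path, pid):
    """Migrate a single PID into the given cpuset."""
    with open(os.path.join(path, "tasks"), "w") as f:
        f.write(str(pid))


if __name__ == "__main__":
    boot = make_cpuset("boot", "0")      # e.g. core 0 hosts the OS
    jobs = make_cpuset("jobs", "1-15")   # remaining cores for compute jobs
    # Herd existing userland daemons onto the boot core; bound kernel
    # threads will refuse to move, so just skip the failures.
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            move_task(boot, pid)
        except OSError:
            pass

You'd then rerun the same benchmark in the 'jobs' cpuset with the boot
core moved around the die (different socket, NUMA domain, shared-cache
neighbour, etc.) and compare the jitter numbers.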
Cheers,
Evan
On Tue, Aug 1, 2017 at 10:32 PM, Christopher Samuel <samuel at unimelb.edu.au>
wrote:
> On 26/07/17 00:31, Evan Burness wrote:
>
> > If I recall correctly, IBM did just what you're describing with the
> > BlueGene CPUs. I believe those were 18-core parts, with 2 of the cores
> > being reserved to run the OS and as a buffer against jitter. That left a
> nice, neat power-of-two number of cores for compute tasks.
>
> Close, but the 18 cores were for yield, with 1 core running the
> Compute Node Kernel (CNK) and 16 cores for the task that the CNK would
> launch. The 18th core was inaccessible.
>
> But yes, I think SGI (RIP) pioneered this on Intel with their Altix
> systems; that was the reason they wrote the original cpuset code in the
> Linux kernel, so they could constrain the boot services to a set of
> cores and leave the rest free to run jobs on.
>
> All the best,
> Chris
> --
> Christopher Samuel Senior Systems Administrator
> Melbourne Bioinformatics - The University of Melbourne
> Email: samuel at unimelb.edu.au Phone: +61 (0)3 903 55545
--
Evan Burness
Director, HPC Solutions
Cycle Computing
evan.burness at cyclecomputing.com
(919) 724-9338