[Beowulf] Hyperthreading and 'OS jitter'
Evan Burness
evan.burness at cyclecomputing.com
Tue Jul 25 07:31:32 PDT 2017
If I recall correctly, IBM did just what you're describing with the
BlueGene CPUs. I believe those were 18-core parts, with 2 of the cores
being reserved to run the OS and act as a buffer against jitter. That left a
nice, neat power-of-2 number of cores (16) for compute tasks.
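On a vanilla Linux node you can approximate the same split in user space by
pinning compute processes away from a couple of "housekeeping" cores. A
minimal sketch (the choice of CPUs 0-1 as the reserved set is just an
illustrative assumption on my part, not anything IBM or AWS actually ship):

    import os

    # Assumption for illustration: logical CPUs 0 and 1 are left to the OS
    # and system daemons; everything else is handed to the compute job.
    RESERVED = {0, 1}
    compute_cpus = set(range(os.cpu_count())) - RESERVED

    # Pin this process (and any children it forks) to the compute CPUs only.
    os.sched_setaffinity(0, compute_cpus)
    print("Compute job confined to CPUs:", sorted(os.sched_getaffinity(0)))

In practice the same effect is usually enforced system-wide with isolcpus or
cpusets rather than per-process calls, but the idea is identical.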
Re: having a specialized, low-power core, this is clearly something that's
already been successful in the mobile device space. The big.LITTLE
<https://en.wikipedia.org/wiki/ARM_big.LITTLE> ARM architecture is designed
for this kind of thing and has been quite successful. Certainly, now that
Intel and AMD are really designing modular SoC-like products, it wouldn't
be terribly difficult to bake in a couple of low-power x86 cores (e.g. Atom
or Xeon-D + a larger Skylake die in Intel's case; Jaguar + Zen in AMD's
case). I'm not an expert in fab economics, but I don't believe it would
significantly add to production costs.
The major public Cloud providers often take a similar approach to IBM's
(with BlueGene) these days. AWS' standard practice is to buy CPUs with
1-2 more cores per socket than they actually intend to expose to users, and
to use those extra cores for managing the hypervisor layer. As an example,
the CPUs in the c4.8xlarge instances are, in reality, custom 10-core Xeon
(Haswell) parts, yet AWS only exposes 8 of the cores per socket to the end
user in order to ensure consistent performance and reduce the chance of a
compute-intensive workload interfering with AWS' management of the
physical node via the hypervisor. Microsoft Azure and Google Cloud
Platform often (but not always) do the same thing, so it's something of a
"best practice" among the Cloud providers these days. Anecdotally, I can
report that in our (Cycle Computing's) work with customers doing HPC and
"Big Compute" on public Clouds, performance consistency has improved a
lot over time, and the Cloud folks have told us that reserving a few
cores per node was a helpful step in that process.
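To make John's odd/even cgroup idea (quoted below) a bit more concrete: one
thing worth checking first is which logical CPUs are actually hyperthread
siblings, since on many boxes the siblings are numbered half the CPU count
apart rather than interleaved odd/even. A rough sketch of that check plus
pinning a job to the "user" half (the even=OS / odd=user split and the
benchmark name are purely illustrative assumptions on my part):

    import os
    import subprocess

    def thread_siblings(cpu):
        """Logical CPUs that share a physical core with `cpu` (Linux sysfs)."""
        path = f"/sys/devices/system/cpu/cpu{cpu}/topology/thread_siblings_list"
        with open(path) as f:
            text = f.read().strip()
        cpus = set()
        for part in text.split(","):
            if "-" in part:
                lo, hi = part.split("-")
                cpus.update(range(int(lo), int(hi) + 1))
            else:
                cpus.add(int(part))
        return cpus

    ncpu = os.cpu_count()
    for cpu in range(ncpu):
        print(f"cpu{cpu} shares a core with {sorted(thread_siblings(cpu))}")

    # Illustrative split: even logical CPUs stay with the OS, odd ones go to
    # the user job (this mirrors the question, not a recommendation).
    user_cpus = {c for c in range(ncpu) if c % 2 == 1}

    # Run a benchmark confined to the user half; the OS keeps the rest.
    # "./your_benchmark" is a placeholder for whatever code you want to test.
    subprocess.run(
        ["./your_benchmark"],
        preexec_fn=lambda: os.sched_setaffinity(0, user_cpus),
    )

A proper cgroup/cpuset would enforce the same split for every user process
rather than one launch at a time, but this is enough to measure whether the
odd/even scheme is performance-neutral for a given code.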
Hope this helps!
Best,
Evan
On Sat, Jul 22, 2017 at 6:13 AM, Scott Atchley <e.scott.atchley at gmail.com>
wrote:
> I would imagine the answer is "It depends". If the application uses the
> per-CPU caches effectively, then performance may drop when HT shares the
> cache between the two processes.
>
> We are looking at reserving a couple of cores per node on Summit to run
> system daemons if the user requests it. If the user can effectively use the
> GPUs, the CPUs should be idle much of the time anyway. We will see.
>
> I like your idea of a low-power core to run OS tasks.
>
> On Sat, Jul 22, 2017 at 6:11 AM, John Hearns via Beowulf <
> beowulf at beowulf.org> wrote:
>
>> Several times in the past I have jokingly asked if there should be
>> another, lower-powered CPU core in a system to run OS tasks (hello Intel -
>> are you listening?)
>> Also in the past there was advice that, to get the best possible throughput
>> on AMD Bulldozer CPUs, you should run only on every second core (as they
>> share FPUs).
>> When I managed a large NUMA system we used cpusets, and the OS ran in a
>> small 'boot cpuset' which was physically near the OS disks and IO cards.
>>
>> I had a thought about hyperthreading though. A few months ago we did a
>> quick study with Blender rendering, and got 30% more throughput with HT
>> switched on. Also, someone I am working with now would like to assess
>> the effect of HT on/HT off on their codes.
>> I know that HT has normally not had any advantages with HPC-type codes -
>> as the core should be 100% flat out.
>>
>> I am thinking though - what would be the effect of enabling HT, and using
>> a cgroup to constrain user codes to run on all the odd-numbered CPU cores,
>> with the OS tasks on the even-numbered ones?
>> I would hope this would be at least performance-neutral? Your thoughts
>> please! Also thoughts on candidate benchmark programs to test this idea.
>>
>>
>> John Hearns........
>> ....... John Hearns
--
Evan Burness
Director, HPC Solutions
Cycle Computing
evan.burness at cyclecomputing.com
(919) 724-9338