[Beowulf] Opinions of Hyper-threading?

Geoff Galitz geoff at galitz.org
Mon Feb 25 11:40:08 PST 2008

As a matter of habit, I usually disable hyper-threading whenever I run
across it.  However... full disclosure: Jon works in my old facility and
may very well be referring to something I installed many moons ago.

There are lots of reasons to disable HT, but one of the biggies is
resource contention.  Assuming HT is enabled to run multiple jobs (rather
than restricting the system to only run multiple peers or threads of the
same job), those jobs will likely be contending for disk and network I/O.
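One quick way to see which logical CPUs are HT siblings (and therefore
share one physical core's execution resources) is to group the entries
Linux reports in /proc/cpuinfo by their (physical id, core id) pair.
A minimal sketch, run here against hardcoded sample data rather than a
live /proc/cpuinfo:

```python
# Group logical processors by (physical id, core id), as reported in
# Linux /proc/cpuinfo.  Processors in the same group are hyper-threading
# siblings sharing one physical core -- and its execution resources.
from collections import defaultdict

def sibling_groups(entries):
    """entries: list of (processor, physical_id, core_id) tuples."""
    groups = defaultdict(list)
    for processor, physical_id, core_id in entries:
        groups[(physical_id, core_id)].append(processor)
    return sorted(groups.values())

# Sample data for a dual-socket single-core Xeon with HT enabled:
# logical CPUs 0 and 2 share the core on socket 0, 1 and 3 on socket 1.
sample = [(0, 0, 0), (1, 1, 0), (2, 0, 0), (3, 1, 0)]
print(sibling_groups(sample))  # [[0, 2], [1, 3]]
```

If two jobs the scheduler believes are on separate "CPUs" fall into the
same group, they are in fact contending for one core.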

This has been said in this thread already, but it bears repeating: some
clusters are more network-reliant than others.  I always try to
configure scratch space locally on the node, but I often store permanent
data one hop away via NFS.  NFS is not the most elegant of protocols in
practice, and consider that a node generating NFS traffic may also be
running MPI jobs.
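That layout can be sketched as a pair of fstab entries (an illustrative
example only; "fileserver" and the device/paths are placeholders, not
from the original post):

```
# /etc/fstab -- illustrative entries, not an actual configuration.
# Local scratch: fast, node-private, generates no network traffic.
/dev/sdb1         /scratch  ext3  defaults,noatime  0 2
# Permanent data: one hop away over NFS, shared with MPI traffic.
fileserver:/home  /home     nfs   rw,hard,intr      0 0
```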

Another thing to consider is that while HT certainly still exists in the
wild, no new systems implement it, so it receives limited testing.  It
is quite possible that a regression introduced into a later Linux kernel
could break HT support when you install that new kernel.

Just my two cents.
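Vincent's 33% figure below can be checked by brute force: place two
processes on two of the four logical CPUs of a dual-Xeon with HT, and
count how often both land on the same physical core.  A quick sketch:

```python
# Enumerate all ways to place 2 processes on 2 distinct logical CPUs of
# a dual-socket Xeon with HT: logical CPUs A.1/A.2 share physical core
# A, and B.1/B.2 share physical core B.
from itertools import combinations
from fractions import Fraction

core = {"A.1": "A", "A.2": "A", "B.1": "B", "B.2": "B"}
placements = list(combinations(core, 2))   # 6 distinct placements
# "Bad" placements pin both processes onto the same physical core.
bad = [p for p in placements if core[p[0]] == core[p[1]]]
print(Fraction(len(bad), len(placements)))  # 1/3
```

Two of the six placements ({A.1,A.2} and {B.1,B.2}) put both processes
on one physical core, i.e. roughly the 33% chance of halved throughput
the post describes.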


> Let's suppose you've inherited 3GHz dual Xeon nodes and that the
> power costs get paid anyway.
> The choice then is between:
> without hyper-threading you've got
>     2 cores @ 3GHz
> with hyper-threading you've got, if you're lucky:
>     2 cores @ 3GHz, each of which can split itself in two, giving
>     4 logical cores @ 1.6GHz
> If you run 2 processes on each node, then there are 4 logical cores
> {A.1, A.2, B.1, B.2}.
> From a scheduling point of view there are several possibilities,
> which compress to three cases:
> {A.1, A.2}    2 x 1.6GHz
> {A.1, B.1}    2 x 3GHz
> {A.1, B.2}    2 x 3GHz
> So the odds are roughly 33% that you end up getting dicked: in 33% of
> the cases your total throughput is 3.2GHz instead of 6.0GHz.
> Seymour Cray's principle comes to mind.
> Now there does exist software on planet earth that just needs a lot
> of throughput, like the LL/LLR type software, provided that the FFT
> size isn't too big.
> You schedule 4 processes and it wins 5% in throughput compared to 2
> Xeons.
> Not the predicted 20% or 30%, but 5%.
> Hip hip hooray, Seymour Cray's principle refuted.
> So for software that just needs throughput, running with HT on might
> be faster under specific circumstances.
> That's however very risky.
> Therefore, most likely, you want to turn off hyper-threading in
> hardware.
> Vincent
> p.s. it's nice if someone else pays your power bill, isn't it?
> On Feb 13, 2008, at 6:58 PM, Jon Forrest wrote:
>> I inherited a cluster containing a bunch
>> of Xeon-based compute nodes. The compute
>> nodes were configured with hyper-threading
>> turned on. I'm wondering what you HPC cluster
>> people think of hyper-threading. I haven't
>> heard much about it recently since most
>> modern processors are true multi-core.
>> The main thing I'd like to know is whether
>> hyper-threading can do any harm when cpu
>> bound jobs are run.
>> Cordially,
>> --
>> Jon Forrest
>> Research Computing Support
>> College of Chemistry
>> 173 Tan Hall
>> University of California Berkeley
>> Berkeley, CA
>> 94720-1460
>> 510-643-1032
>> jlforrest at berkeley.edu
>> _______________________________________________
>> Beowulf mailing list, Beowulf at beowulf.org
>> To change your subscription (digest mode or unsubscribe) visit
>> http://www.beowulf.org/mailman/listinfo/beowulf

Geoff Galitz
Blankenheim, DE
