[Beowulf] Optimal BIOS settings for Tyan K8SRE

stephen mulcahy smulcahy at aplpi.com
Mon Sep 4 05:51:11 PDT 2006


Hi Mark,

Thanks for your mail.

See my comments below.

Mark Hahn wrote:
>> 270s) which are being used primarily for Oceanographic modelling (MPICH2
>> running on Debian/Linux 2.6 kernel).
> 
> on gigabit?

Yes, on gigabit (is this an uh-oh moment? :) Someone has suggested I
should be looking at OpenMPI in preference to MPICH2. We did some
initial testing with a beta of LAM but it was too buggy to be usable (we
hit the NFS bug in 7.1.1 and the test suite had some failures in
7.1.2beta) - is there a significant performance difference between the two?
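If it helps frame the comparison: probably the cheapest way to answer
that is to run the same latency micro-benchmark under both stacks on the
same pair of nodes. A rough sketch, assuming something like the OSU
osu_latency benchmark has been built once against each implementation
(the binary and hostfile names below are placeholders):

```shell
#!/bin/sh
# Sketch: run one micro-benchmark under both MPI stacks for an
# apples-to-apples latency comparison. "osu_latency" and "hosts"
# are placeholders for your benchmark binary and node list.

if command -v mpdboot >/dev/null 2>&1; then
    # MPICH2's mpd-based process manager (1.0.x era)
    mpdboot -n 2 -f hosts
    mpiexec -n 2 ./osu_latency
    mpdallexit
else
    echo "mpdboot not found; skipping MPICH2 run"
fi

if command -v mpirun >/dev/null 2>&1; then
    # Open MPI's launcher
    mpirun -np 2 --hostfile hosts ./osu_latency
else
    echo "mpirun not found; skipping Open MPI run"
fi
```

Small-message latency is where gigabit MPI stacks tend to differ most,
so that is the number worth comparing first.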

> 
>> I had to make some tweaks to make all 4GB of RAM visible to the OS.
> 
> how much was missing, and was it just graphics aperture-related?

We were missing about 1GB as far as I remember, so it was more than just
the graphics aperture, afaics.

> 
>>     HT-LDT Frequency    Auto
>>     Dual-Core Enable    Enabled
>>     ECC Features
>>         ECC    Enabled
>>         ECC Scrub Redirection    Enabled
>>         Dram ECC Scrub CTL    Disabled
>>         Chip-Kill    Disabled
>>         DCACHE ECC Scrub CTL    Disabled
>>         L2 ECC Scrub CTL    Disabled
> 
> those seem to be normal settings I see on most machines.  the RAS-related
> settings seem to be unnecessary for a "normal" cluster (one where no large
> rate of ECC's happen, and one where a reboot doesn't cause planes to fall
> out of the sky.)
> 
> on the other hand, I'd love to find out whether there is any performance
> impact from enabling scrub, since it does slightly increase memory
> workload.
> then again, if your rate of correctable ECCs is trivial, scrubbing is
> not relevant...

I was wondering if the scrubbing had a performance impact myself. I
guess if there is no performance impact then, since the functionality is
there, I'm inclined to enable as much of it as possible - but if it
costs a few percent of performance then I'm inclined to let a node die
on occasion rather than hobble the whole cluster ... but I'm not clear
on how exactly scrubbing works. Does anyone have any insights? Is
scrubbing something that's only triggered in the event of an error - or
is it something that happens continuously in the background, and if so,
does it incur a performance penalty?
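For what it's worth, one way to see whether scrubbing is even relevant
on a given box is to watch the correctable-error counters that the
kernel's EDAC layer exports in sysfs (needs the appropriate EDAC driver
for the memory controller loaded) - a sketch using the standard EDAC
paths:

```shell
#!/bin/sh
# Print per-memory-controller ECC counts from the EDAC sysfs interface.
# ce_count = correctable errors, ue_count = uncorrectable errors.
for mc in /sys/devices/system/edac/mc/mc*; do
    if [ ! -d "$mc" ]; then
        echo "no EDAC memory controllers found (is the EDAC driver loaded?)"
        break
    fi
    echo "$mc: $(cat "$mc/ce_count") correctable, $(cat "$mc/ue_count") uncorrectable"
done
```

If ce_count stays at zero across weeks of runs, the scrub settings are
unlikely to matter much either way.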

> 
>>     Memory Hole
>>         4GB Memory Hole Adjust    Manual
>>         4GB Memory Hole Size    768 MB
>>         IOMMU    Enabled
>>         Size    32 MB
>>         Memhole mapping    Hardware
> 
> I don't think there are performance implications here.  you seem to have
> already found the right combination of iommu/memhole settings that give
> you your full roster of ram.  my googling on the topic didn't enlighten
> me much, though people apparently recommend "iommu=memaper=3"

I did some follow-up googling myself and it sounds like
"iommu=memaper=3" is useful if you run out of IOMMU space ... but
failing that there's probably no benefit? Someone has suggested that
"software" Memhole mapping may be "better" but I'm not sure what
"better" means yet.
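In case it's useful to anyone trying this: "iommu=memaper=3" is a
kernel boot parameter, not a BIOS setting - on x86_64 kernels,
memaper=N asks for a software aperture of 32MB << N allocated over RAM
(so N=3 gives 256MB). It goes on the kernel line in the bootloader
config; a sketch for GRUB (the kernel version and root device here are
placeholders):

```
# /boot/grub/menu.lst (placeholder kernel/root values)
title   Debian GNU/Linux, kernel 2.6.x
root    (hd0,0)
kernel  /vmlinuz-2.6.x root=/dev/sda1 ro iommu=memaper=3
```

After a reboot, `dmesg | grep -i aperture` should show what the kernel
actually set up.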

> 
>>     Memory Config
>>         Swizzle Memory Banks    Enabled
> 
> donno - I don't think this appears in the AMD bios-writers guide
> 
>>         DDR clock jitter    Disabled
>>         DDR Data Transfer Rate    Auto
>>         Enable all memory clocks    Populated
>>         Controller config mode    Auto
>>         Timing config mode    Auto
> 
> those are the settings I normally see as well.
> 
>>     AMD PowerNow!    Disabled
>>     Node Memory Interleave    Auto
>>     Dram Bank Interleave    Auto
> 
> for numa-aware OS's (like any modern linux), I think node-memory
> interleave should be disabled.

Thanks, it seems that "node-memory interleave" could cause a performance
hit alright and I'll definitely disable this.

(some numbers here -
http://www.digit-life.com/articles2/cpu/rmma-numa.html).
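Once it's disabled, an easy sanity check is to confirm the OS now sees
two separate memory nodes rather than one big interleaved one - e.g.
with numactl (a sketch; requires the numactl package):

```shell
#!/bin/sh
# Check how the kernel sees the machine's NUMA topology. On a
# dual-Opteron board with node interleave disabled, this should list
# one node per socket, each with roughly half the RAM.
if command -v numactl >/dev/null 2>&1; then
    numactl --hardware
else
    echo "numactl not installed"
fi
```

With node interleave enabled in the BIOS you'd typically see only a
single node instead.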

Thanks again for your response,

-stephen

-- 
Stephen Mulcahy, Applepie Solutions Ltd, Innovation in Business Center,
   GMIT, Dublin Rd, Galway, Ireland.      http://www.aplpi.com
