Scyld Hardware Question

Aaron Collier collier at
Sat Aug 31 14:13:31 PDT 2002


(1) Was hyper-threading enabled on the nodes?

NOTE 1: I have read that hyper-threading isn't stable even with the 2.4.18 
kernel (system performance eventually degrades drastically).

(2) Were the nodes using IDE or SCSI storage?

NOTE 1: Intel admits that DMA doesn't function properly with the Intel 
E7500 chipset under RedHat 7.2 (the release Scyld 28cz-4 is based on), and 
provides the solution of upgrading to a patched 2.4.19 kernel (if I recall 
correctly, 28cz-4 only uses a variant of the 2.4.17 kernel).

NOTE 2: Without proper DMA support the effective throughput on the IDE bus 
is an embarrassing 3 MB/sec.
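One quick way to see whether you are hitting this is to check the DMA flag 
directly (a sketch, not Scyld-specific; it assumes the first IDE disk is 
/dev/hda and the 2.4-era /proc/ide interface is present):

```shell
# Check whether DMA is enabled on the first IDE disk (assumed /dev/hda).
# On a 2.4 kernel, "using_dma ... 1" means DMA is on; 0 means the disk
# has fallen back to slow PIO transfers.
if [ -e /proc/ide/hda/settings ]; then
    grep using_dma /proc/ide/hda/settings
else
    # No 2.4 /proc/ide interface here; hdparm -d /dev/hda reports the
    # same flag, and hdparm -t /dev/hda measures raw read throughput.
    echo "no /proc/ide/hda/settings; try: hdparm -d /dev/hda"
fi
```

If DMA is off, a timed read with hdparm -t will show throughput in the 
single-digit MB/sec range rather than what the drive is capable of.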

NOTE 3: Since Scyld modifies the kernel, patches derived from changes to 
the vanilla source may not apply cleanly or work at all.

NOTE 4: I have been told by Scyld tech support not to bother attempting to 
upgrade the kernel since various patches need to be applied.

(3) Did the cluster nodes use the Intel E7500 chipset?

NOTE 1: Even the vanilla 2.4.18 kernel neglects to implement IRQ balancing 
on the E7500 chipset (only CPU0 is able to handle interrupts).
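The imbalance is visible directly in /proc/interrupts: on an affected box, 
every non-zero count sits in the CPU0 column while the other CPU columns 
stay at zero (a generic Linux check, nothing Scyld-specific):

```shell
# Show per-CPU interrupt counts. With IRQ balancing missing, all the
# interrupt activity piles up under the CPU0 column.
head -n 12 /proc/interrupts
```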

(4) Did you resolve all of the aforementioned issues in release 28cz-4, or 
is the meaning of the term "supported" still relative?


On Fri, 30 Aug 2002, Donald Becker wrote:

> On Fri, 30 Aug 2002, Aaron Collier wrote:
> > Has anyone installed Scyld Beowulf OS on a dual Pentium 4 Xeon system?
> We test with a few dual P4-Xeon systems here, and at LinuxWorld we were 
> demonstrating on a 10 node / 20 processor CA-Digital cluster in the
> Intel booth.
> >  If so, what motherboard are you using?  I am asking because although Scyld 
> > has told me that version 28cz-4 supports the P4 Xeon, I have my doubts 
> > after reading all of the problems mentioned in the posted e-mails.  Also, 
> > I have noticed that in the computer industry the meaning of the term 
> > "supported" is relative :-)
> There can always be issues with any system.  
> As long-time list readers know, we have written and supported a great deal
> of cluster software over the years.  We are in a much better position
> than almost any other cluster company, large or small, to fix problems
> as they occur.

More information about the Beowulf mailing list