[Beowulf] dual core intel/amd

Jim Lux james.p.lux at jpl.nasa.gov
Thu Apr 28 07:23:46 PDT 2005

----- Original Message -----
From: "Vincent Diepeveen" <diep at xs4all.nl>
To: <beowulf at beowulf.org>
Sent: Thursday, April 28, 2005 6:16 AM
Subject: [Beowulf] dual core intel/amd

> hi,
> Several programs tested under which Diep at sudhian.com
> For diep :  http://www.sudhian.com/showdocs.cfm?aid=667&pid=2543
> If you ask me intel is in serious troubles with respect to beowulfs.

I don't know that Intel seriously cares about tailoring to cluster
computing.  It's a tiny, tiny fraction of their overall sales.  There may
be, what, as many as 100,000 top-of-the-line Intel processors sold for
cluster computing.

> The dual core opteron is just outperforming the dual core P4 so much.
> Partly because intel has weakened the weak L1 and L2 caches even more.
> <snip>

The vast majority of Intel sales are probably for desktop and commercial
server-type applications. I suspect that Intel carefully looks at the typical
instruction mix generated by, e.g., WinXP or Longhorn or NT2003, or even
Linux, in a web server/file server/SQL backend/XML-XSLT processor kind of
environment, and allocates on-chip resources accordingly.

Raw, pedal-to-the-metal computation is a tiny part of what most CPUs are
sold for.  As I type this on my old 300 MHz notebook, running Win2000, I
notice that the CPU usage never goes above 5%. So, even in a
notoriously inefficient environment (Windows, Outlook, graphics display,
etc.), the demands on the CPU are negligible.  The same is probably true of
95% of computers sold.

Even in "production servers", the rate-determining aspect is usually not CPU
speed but ancillary stuff like networking, disk I/O, etc.  Think about any
business transaction that might be performed today, and the kinds of CPU
resources needed to execute it. Since business applications are what drive
the market, this is relevant.  There's some sort of inquiry, then a
transaction gets created and processed through a series of stages.  At each
step, there are numerous database lookups and multiple messages passed
up and down the tiers of processors (user interface on one PC, translator
with middleware on another PC, business rules engine on a third layer,
database server on the fourth-layer back end).  The whole trend towards
"total virtualization/abstraction" of the transaction places heavy
demands on systems that translate one representation of an event (or part of
a transaction) into another.
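A rough back-of-the-envelope sketch of that four-tier picture (all the
numbers below are illustrative assumptions of mine, not measurements of any
real system) shows why CPU speed isn't the rate-determining step:

```python
# Hypothetical cost model for one transaction crossing four tiers.
# All constants are assumed values for illustration only.
TIERS = ["UI PC", "middleware translator", "business rules", "database server"]

CPU_PER_TIER_MS = 0.5   # assumed compute per tier
NETWORK_RTT_MS = 2.0    # assumed LAN round trip between adjacent tiers
DB_LOOKUPS = 10         # assumed database lookups per transaction
DB_LOOKUP_MS = 5.0      # assumed latency per lookup (disk/cache)

cpu_time = CPU_PER_TIER_MS * len(TIERS)
io_time = NETWORK_RTT_MS * (len(TIERS) - 1) + DB_LOOKUPS * DB_LOOKUP_MS

print(f"CPU time:     {cpu_time:.1f} ms")
print(f"Network+disk: {io_time:.1f} ms")
print(f"CPU share:    {cpu_time / (cpu_time + io_time):.1%}")
```

Even with these generous per-tier compute costs, the CPU ends up accounting
for only a few percent of the transaction's wall-clock time; the rest is
round trips and lookups, which is the point.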

In any case, Intel designs for this kind of thing, not for cluster builders.
And that's always been the hallmark of a Beowulf (vis-a-vis a real supercomputer):
the Beowulf idea is to take "non-optimal" hardware that's commodity and
cheap and use it for supercomputing, overcoming the lack of special features
and performance in a brute-force, "throw cheap hardware at it" kind of way.

> Vincent
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org
> To change your subscription (digest mode or unsubscribe) visit

More information about the Beowulf mailing list