[Beowulf] Thought that this might be of interest
diep at xs4all.nl
Tue Nov 7 09:17:29 PST 2006
In short, a lot of confusion, and all of it for nothing.
For those CFD codes, more RAM is probably better. You would prefer a
0.8GHz processor with 1 terabyte of RAM over an 8GHz Core2 with 4GB of RAM.
The use of the word 'node' is totally out of context here.
We are talking only about a machine with a lot of RAM. In fact, I've been
using a few processors on Itanium systems where I needed only a few MB of
RAM per core, fitting in L3 (for sieving), while one guy running CFD just
wanted all the RAM, using only one core for the calculations.
That was a good use of the hardware!
CPU speed is totally irrelevant here. Just give them a terabyte of DDR2 RAM.
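A tiny sketch of why clock speed stops mattering for such codes (a hedged
Python illustration, not from the original thread; absolute timings depend
entirely on the machine): chasing pointers through an array much larger
than cache is paced by memory latency, not by the CPU.

```python
import random
import time

def chase(n, steps):
    """Follow a random permutation through an n-element array.

    When n is large, each access is likely a cache miss, so the loop
    is bounded by trips to main memory rather than by clock speed.
    """
    nxt = list(range(n))
    random.shuffle(nxt)
    i = 0
    t0 = time.perf_counter()
    for _ in range(steps):
        i = nxt[i]
    return time.perf_counter() - t0

t_small = chase(1 << 10, 500_000)   # fits comfortably in cache
t_large = chase(1 << 22, 500_000)   # spills far beyond cache, into RAM
print(f"cache-resident: {t_small:.3f}s  RAM-resident: {t_large:.3f}s")
```

On most machines the large case is noticeably slower per access even
though the loop body is identical, which is the whole point: for a
RAM-bound job, a faster core buys you very little.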
Therefore, Jeffrey, buying new CPUs for this is pointless; you would
probably prefer a single-core 2.8GHz A64 with a LOT of DDR2 RAM for this,
and give all the other cores to people who can actually use CPU speed.
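That division of labour can be scripted: pin the RAM-hungry job to one
core and leave the rest free for CPU-bound users. A minimal sketch using
Python's Linux-only `os.sched_setaffinity` (guarded, since the call does
not exist on other platforms; the choice of CPU here is arbitrary):

```python
import os

# Pin the current (memory-hungry) process to a single core so the
# remaining cores stay available for CPU-bound jobs. Linux-only API.
if hasattr(os, "sched_setaffinity"):
    available = os.sched_getaffinity(0)   # CPUs this process may run on
    os.sched_setaffinity(0, {min(available)})
    print("pinned to CPU", min(available))
else:
    print("CPU affinity control not available on this platform")
```

The same effect can be had from the shell with `taskset` on Linux;
either way the scheduler then keeps the big-memory job out of the way.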
Note that you might want to consider other solutions for CFD codes.
There are 'RAM harddrives' with very fast access times: something that
looks like a harddrive but is in fact just a big bunch of RAM.
The word 'node' is totally irrelevant in this discussion. What matters is
RAM size, and only long after that the latency to the RAM.
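The software analogue of those RAM-harddrive boxes is a tmpfs: on most
Linux systems `/dev/shm` is RAM-backed, so files written there never touch
a disk. A hedged sketch (the `/dev/shm` path is an assumption, so it falls
back to an ordinary temp directory elsewhere):

```python
import os
import tempfile

# Write scratch data to a RAM-backed filesystem when one is available.
# On most Linux systems /dev/shm is a tmpfs; files there live in RAM.
shm = "/dev/shm" if os.path.isdir("/dev/shm") else tempfile.gettempdir()
path = os.path.join(shm, "cfd_scratch.bin")
with open(path, "wb") as f:
    f.write(b"\0" * (1 << 20))   # 1 MiB of scratch data
size = os.path.getsize(path)
os.remove(path)
print("wrote", size, "bytes under", shm)
```

For a solver that streams intermediate fields to 'disk', pointing its
scratch directory at a tmpfs gives it RAM-speed I/O without touching the
code.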
----- Original Message -----
From: "Jeffrey B. Layton" <laytonjb at charter.net>
To: "beowulf" <beowulf at beowulf.org>
Sent: Monday, November 06, 2006 7:41 PM
Subject: Re: [Beowulf] Thought that this might be of interest
> Greg Lindahl wrote:
>> On Sun, Nov 05, 2006 at 06:38:25PM -0500, Joe Landman wrote:
>>> Since they wish to do it only for Intel processors, and the world is
>>> decidedly mixed, this has implications on the use of Intel compilers for
>>> lots of people wishing to get the best performance on all platforms with
>>> a single compiler tool. Doesn't it.
>> Some codes are also outliers, for example Fluent on Woodcrest does
>> great, if I remember correctly.
> The Fluent benchmarks need to be explained. There are basically
> 9 benchmarks (small, medium, and large models). On the small
> and medium models the Woodcrest kicks butt on a single node
> (1-4 cores) mostly due to the cache size. On some of the large
> models, the Woodcrest does very well on a single node (1-4 CPUs).
> But on some of the large models, the Opteron does very well.
> The small and medium models are really pretty small so when
> I look at more than 1 node, I look at the large models. At a
> certain point, the slower Opterons become faster than Woodcrest
> (both IB).
> I also know one CFD app that is faster on Opteron even for one
> node (1-4 CPUs).
> Beowulf mailing list, Beowulf at beowulf.org