[Beowulf] evaluating FLOPS capacity of our cluster

Gus Correa gus at ldeo.columbia.edu
Mon May 11 11:06:47 PDT 2009


Rahul Nabar wrote:
> On Mon, May 11, 2009 at 12:23 PM, Gus Correa <gus at ldeo.columbia.edu> wrote:
>> Theoretical maximum Gflops (Rpeak in Top500 parlance), for instance,
>> on cluster with AMD quad-core 2.3GHz processor
>> is:
>>
>> 2.3 GHz x
>> 4 floating point operations/cycle x
>> 4 cores/CPU socket x
>> number of CPU sockets per node x
>> number of nodes.
> 
> Excellent. Thanks Gus. That sort of estimate is exactly what I needed.
> I do have AMD Athlons.
> 
> In fact, this is super useful for some of our oldest legacy hardware
> too. We used to just use Dell desktops clustered together. I have
> easy access to all the other info that goes into your equation,
> except the floating point operations/cycle numbers.
> 
> Let me dig those out.
> 
> Thanks!
> 

Hi Rahul

I am glad that it helped.

However, note that Rpeak doesn't account for network latency, I/O,
cache misses, memory latency, etc.
It is based on the idealized assumption
that all processors are running at full speed,
doing nothing but floating point operations non-stop,
working together in perfect sync,
and communicating instantly with each other.
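As a sketch, the Rpeak formula quoted above works out like this (the 2.3 GHz, 4 flops/cycle, and 4 cores/socket figures come from the quote; the socket and node counts here are made-up placeholders):

```python
# Back-of-envelope Rpeak estimate for a hypothetical cluster.
# Only the first three figures come from the discussion above;
# sockets_per_node and nodes are assumed for illustration.
clock_ghz = 2.3         # clock speed, in GHz
flops_per_cycle = 4     # floating point operations per cycle per core
cores_per_socket = 4    # quad-core CPU
sockets_per_node = 2    # assumed dual-socket nodes
nodes = 10              # assumed cluster size

rpeak_gflops = (clock_ghz * flops_per_cycle * cores_per_socket
                * sockets_per_node * nodes)
print(rpeak_gflops)     # ~736 Gflops for this made-up cluster
```

The same arithmetic applies to the old Dell desktops, once the flops/cycle figure for those CPUs is dug out.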

I would suggest applying a reasonable
Rmax/Rpeak ratio to the Rpeak number(s) you may get for your cluster(s), 
so as not to overestimate performance too much.

Typical Rmax/Rpeak ratios in the Top500 are in the 80% ballpark.
The very first on the list, Roadrunner, is ~76%, IIRC.
You may want to check the Top500 list for further information,
or to match Rmax/Rpeak to your hardware (e.g. GigE vs. InfiniBand):

http://www.top500.org/list/2008/11/100

Gus Correa
---------------------------------------------------------------------
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
---------------------------------------------------------------------
