[Beowulf] evaluating FLOPS capacity of our cluster
Gus Correa
gus at ldeo.columbia.edu
Mon May 11 10:23:54 PDT 2009
Rahul Nabar wrote:
> I was recently asked to report the FLOPS capacity of our home-built
> computing cluster. Never did that before. Some googling revealed that
> LINPACK is one such benchmark. Any other options / suggestions?
>
> I am not interested in a very precise value just a general ballpark to
> generate what-if scenarios for future additions and phase-outs. This
> is mostly commodity Dell hardware. Can I get approximate FLOPS numbers
> (neglecting network etc.) off the shelf anywhere?
>
Hi Rahul
HPL is the current version of the Linpack benchmark, and probably
what you want (it is the one used for the Top500 list):
http://www.netlib.org/benchmark/hpl/
You may also want to link HPL to the Goto BLAS:
http://www.tacc.utexas.edu/resources/software/
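If you do run HPL, the main knob in the HPL.dat input file is the
problem size N. A common rule of thumb is to pick N so that the
N x N matrix of doubles fills about 80% of your total memory.
A quick estimate in Python (the node count and memory per node
below are just example numbers, substitute your own):

import math

# Rule of thumb: size the HPL matrix to fill ~80% of total cluster RAM.
# Example values only -- substitute your own node count and memory.
nodes = 16                # number of compute nodes
mem_per_node_gb = 8       # GB of RAM per node

total_bytes = nodes * mem_per_node_gb * 1024**3
n = int(math.sqrt(0.80 * total_bytes / 8))   # 8 bytes per double
print("Suggested HPL problem size N ~ %d" % n)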
Another possibility is the NAS Parallel Benchmarks (NPB), from NASA:
http://www.nas.nasa.gov/Resources/Software/npb.html
Theoretical maximum Gflops (Rpeak in Top500 parlance), for instance,
on a cluster with AMD quad-core 2.3 GHz processors
is:
2.3 GHz x
4 floating point operations/cycle x
4 cores/CPU socket x
number of CPU sockets per node x
number of nodes.
I don't have Intel processors.
For Intel processors it should be similar,
but you need to check on the Intel site
how many floating point operations per cycle your model does,
and adjust for your number of cores, clock frequency, etc.
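To make that concrete, here is the same Rpeak arithmetic as a tiny
Python snippet (the socket and node counts below are placeholders,
plug in your own):

ghz = 2.3                 # clock frequency in GHz
flops_per_cycle = 4       # floating point operations per cycle per core
cores_per_socket = 4      # quad-core
sockets_per_node = 2      # example: dual-socket nodes
nodes = 16                # example: 16 nodes

rpeak_gflops = (ghz * flops_per_cycle * cores_per_socket
                * sockets_per_node * nodes)
print("Rpeak = %.1f Gflops" % rpeak_gflops)   # 1177.6 for these numbers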
If you don't feel like running the HPL benchmark (it is fun,
but time-consuming) to get your actual Gigaflops
(Rmax in Top500 jargon),
you can look up in the Top500 list the Rmax/Rpeak ratio of clusters
with hardware similar to yours:
http://www.top500.org/list/2008/11/100
You can then apply this factor to your Rpeak calculated as above
to get a reasonable guess for your Rmax.
This may be good enough for the purpose you mentioned.
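For instance (the 0.70 below is only an illustrative ratio;
read the actual Rmax/Rpeak off a comparable Top500 entry):

rpeak_gflops = 1177.6     # from the Rpeak example above
ratio = 0.70              # example Rmax/Rpeak; use one from a similar machine
rmax_estimate = rpeak_gflops * ratio
print("Estimated Rmax ~ %.1f Gflops" % rmax_estimate)   # ~824.3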
I hope this helps.
Gus Correa
---------------------------------------------------------------------
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
---------------------------------------------------------------------