[Beowulf] evaluating FLOPS capacity of our cluster
Gus Correa
gus at ldeo.columbia.edu
Mon May 11 12:09:17 PDT 2009
Mark Hahn wrote:
>>> Excellent. Thanks Gus. That sort of estimate is exactly what I needed.
>>> I do have AMD Athlons.
>
> right - for PHB's, peak theoretical throughput is a reasonable approach,
> especially since it doesn't require any real work on your part. the only
> real magic is to find the flops-per-cycle multiplier for your cpus.
> basically, anything introduced since core2 has been 4 f/c (incl core2).
> before that, only ia64 was 4 f/c. as others have mentioned, the ">= core2
> generation" includes AMD barcelona/shanghai/etc versions (server and
> desktop), as well as nehalem on the intel side.
>
>> Typical Rmax/Rpeak ratios in Top500 are in the 80% ballpark.
>
> 80 is fairly high, and generally requires a high-bw, low-lat net.
> gigabit, for instance, is normally noticeably lower, often not much
> better than 50%. but yes, top500 linpack is basically just
> interconnect factor * peak, and so unlike real programs...
Hi Mark, list
I haven't checked the Top500 list in detail,
but I think you are right that 80% is fairly high
(for big clusters, perhaps?).
In the original email I mentioned that Roadrunner (Top500 #1)
has Rmax/Rpeak ~= 76%.
However, without any particular expertise or too much effort,
I got 83.4% Rmax here. :)
I was happy with that number,
until somebody on the OpenMPI list told me that
"anything below 85%" needs improvement. :(
This HPL test was done on 24 dual-socket quad-core Shanghai 2376 nodes,
w/ Infiniband (III, not ConnectX) and a single Mellanox switch;
HPL, GotoBLAS, and OpenMPI 1.3.2 were all compiled with GNU compilers.
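For what it's worth, a back-of-the-envelope Rpeak estimate for this
cluster, using Mark's 4 flops/cycle figure for Shanghai and assuming
the 2376 runs at 2.3 GHz (please double-check the clock on your own
parts; this is a sketch, not an official number):

    # Theoretical peak for 24 dual-socket quad-core Shanghai nodes.
    # ASSUMPTION: Opteron 2376 clock = 2.3 GHz.
    nodes            = 24
    sockets_per_node = 2
    cores_per_socket = 4
    ghz              = 2.3   # assumed clock speed
    flops_per_cycle  = 4     # Shanghai, per Mark's note above
    rpeak_gflops = nodes * sockets_per_node * cores_per_socket * ghz * flops_per_cycle
    print("Rpeak ~= %.1f GFLOPS" % rpeak_gflops)  # ~1766.4 GFLOPS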
I haven't tried to run it on the GigE network yet, for comparison,
and I presume performance would be degraded,
but I can't play the HPL game any longer;
I have to do production work.
Is 83.4% a "small is better" effect? :)
Of course even the HPL Rmax is
not likely to be reached by a real application,
with I/O, etc, etc.
Rahul and I may be better off testing Rmax with our real
programs.
Thank you,
Gus Correa
---------------------------------------------------------------------
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
---------------------------------------------------------------------