[Beowulf] evaluating FLOPS capacity of our cluster
Gus Correa
gus at ldeo.columbia.edu
Mon May 11 14:58:41 PDT 2009
Rahul Nabar wrote:
> On Mon, May 11, 2009 at 2:09 PM, Gus Correa <gus at ldeo.columbia.edu> wrote:
>> Of course even the HPL Rmax is
>> not likely to be reached by a real application,
>> with I/O, etc, etc.
>> Rahul and I may be better off testing Rmax with our real
>> programs.
>>
>
> I do know that these benchmarks can be somewhat unrealistic and the
> real test is the actual application that you want to run. I already
> have those timed benchmarks for my particular computational chemistry
> code, specifically for a job representative of what we might
> consider "typical" for computations on our cluster.
>
Hi Rahul, list
For reliable estimates of the computational effort required by research
projects, project duration, etc., there is no real substitute for what
you did: timing your applications on typical runs.
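For the record, even a crude wrapper like the sketch below gives a
usable wallclock number for a typical job. The command line is just a
placeholder; substitute your own solver and input deck:

    import subprocess, time

    t0 = time.time()
    # Placeholder job: replace with your actual application and inputs.
    subprocess.check_call(["mpirun", "-np", "16",
                           "./my_app", "typical_case.in"])
    print("Wallclock for typical run: %.1f s" % (time.time() - t0))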
> Yet, while speaking to larger audiences sometimes FLOPS becomes a
> commonly reported and understood benchmark and hence my desire to
> compute it.
>
> That is another reason why the exact FLOPS capacity is of less
> interest to me than an approximate value.
>
You are very right.
The larger audiences include
grant agencies (NSF, NOAA, NIH, etc.),
potential donors,
upper administration in academia, etc.
Even nominal Gflops are OK for a grant proposal,
as long as you say that they are nominal numbers.
Donors may be happy to read that your cluster, which
was partially funded by their XYZ-Foundation, broke the
Teraflop barrier on the Top500 HPL benchmark.
Directors and deans may approach donors with plans
to expand the cluster to a (nominal) capacity of 3 Teraflops, and so on.
As long as *you* don't get carried along by those numbers! :)
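For what it's worth, here is a back-of-the-envelope sketch of where
such a nominal number comes from. All the hardware figures below are
made-up assumptions for illustration, not anybody's actual cluster:

    # Nominal (theoretical peak) FLOPS:
    #   nodes * sockets/node * cores/socket * FLOPs/cycle/core * clock
    nodes            = 24      # hypothetical cluster size
    sockets_per_node = 2
    cores_per_socket = 4
    flops_per_cycle  = 4       # e.g. SSE2: 2 adds + 2 muls per cycle
    clock_hz         = 2.3e9   # 2.3 GHz

    peak = (nodes * sockets_per_node * cores_per_socket
            * flops_per_cycle * clock_hz)
    print("Nominal peak: %.1f Gflops" % (peak / 1e9))
    # -> 1766.4 Gflops, i.e. roughly 1.8 "nominal" Teraflops

No real application will hit that figure, of course; even HPL's Rmax
only reaches some fraction of it.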
Gus Correa
---------------------------------------------------------------------
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
---------------------------------------------------------------------