[Beowulf] I'm so glad I didn't buy one of these
Olli-Pekka Lehto
olli-pekka.lehto at csc.fi
Thu Jul 3 00:41:07 PDT 2014
On 03 Jul 2014, at 09:48, John Hearns <hearnsj at googlemail.com> wrote:
> On 03/07/2014, Joseph Landman <landman at scalableinformatics.com> wrote:
>>
>> Now just get all those writing RFPs to stop with the madness. The number
>> and its pursuit are one of the most significant causes of entropy (as in
>> waste heat generation) that I have ever seen. A dubious metric at best, with
>> extremely limited, if any, correlation to end-user realizable performance.
>
>
> Absolutely agree. Application-level benchmarks for the codes the
> system is going to run.
> And also put that in as a simple acceptance test - does the cluster
> run code X on N processors in the time T you said it would? Wham bam
> thank you.
>
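Scripting that kind of check is the easy part. Here is a rough sketch (the application
name, process count, target time and the mpirun launcher are all hypothetical placeholders):

#!/usr/bin/env python
# Minimal acceptance check: does "code X" on N processes finish successfully
# within the committed wall time T? All concrete values are placeholders.
import subprocess
import sys
import time

N_PROCS = 512            # N processes promised in the bid (placeholder)
TARGET_SECONDS = 3600.0  # committed wall time T (placeholder)
CMD = ["mpirun", "-np", str(N_PROCS), "./code_x", "input.dat"]  # hypothetical code X

start = time.time()
rc = subprocess.call(CMD)
elapsed = time.time() - start

print("exit code %d, wall time %.1f s (target %.1f s)"
      % (rc, elapsed, TARGET_SECONDS))
sys.exit(0 if (rc == 0 and elapsed <= TARGET_SECONDS) else 1)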
In practice, using full application benchmarks may be challenging: when buying
cutting-edge gear, the vendors usually have to rely on extrapolating the performance
of the application on future hardware.
The more complicated the application, the more work it is to extrapolate and
the more uncertainty there is. This in turn will (at least in the current economic climate)
put most vendors in “risk minimization” mode: they avoid overcommitment and
make very conservative bids, or even fall back to offering “safe” current-generation
hardware *yawn*.
Thus, capturing a minimal set of relevant, critical performance metrics with synthetic
benchmarks and simple kernels would be ideal. The downside is that this may not
always be a simple task either.
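To give a concrete flavour of what I mean, a minimal metric set could be checked against
per-node targets roughly like this (metric names and numbers are made up, and the measured
values would come from e.g. STREAM, DGEMM and a latency benchmark):

# Hypothetical per-node acceptance targets for a few simple kernels.
TARGETS = {
    "stream_triad_GBps": 85.0,   # memory bandwidth (placeholder)
    "dgemm_GFLOPs":     550.0,   # dense matrix-multiply rate (placeholder)
    "mpi_latency_usec":   1.5,   # point-to-point latency, lower is better
}
LOWER_IS_BETTER = set(["mpi_latency_usec"])

def check(measured):
    """Compare measured values against the targets, metric by metric."""
    for name, target in sorted(TARGETS.items()):
        value = measured[name]
        ok = value <= target if name in LOWER_IS_BETTER else value >= target
        print("%-20s %8.1f (target %8.1f) %s"
              % (name, value, target, "PASS" if ok else "FAIL"))

# Example with made-up measurements:
check({"stream_triad_GBps": 92.3, "dgemm_GFLOPs": 531.0, "mpi_latency_usec": 1.2})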
>
> What are the audience's thoughts on the HPCC benchmarks?
>
They provide a nice toolkit for the kind of evaluation I mentioned above. For some things
(like I/O), complementary benchmarks should be used, though.
Having a single-number benchmark like HPL is nice for evaluating trends, though, and
I’m also pretty sure that the need for some sort of top ranking will not
go away soon. There are a couple of interesting efforts to do this in a more balanced way
that better reflects “average” application performance:
- HPGMG (https://hpgmg.org/)
- HPCG (https://software.sandia.gov/hpcg/)
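For illustration only (this is not what either of the above actually computes), even a crude
composite such as a geometric mean of a few kernel results, normalized to a reference
system, gives a less HPL-centric single number:

import math

# Toy composite: geometric mean of kernel results relative to a reference
# system. Metric names and numbers are hypothetical.
REFERENCE = {"hpl_TFLOPs": 100.0, "stream_TBps": 4.0, "spmv_GFLOPs": 900.0}
measured  = {"hpl_TFLOPs": 180.0, "stream_TBps": 4.5, "spmv_GFLOPs": 950.0}

ratios = [measured[k] / REFERENCE[k] for k in REFERENCE]
score = math.exp(sum(math.log(r) for r in ratios) / len(ratios))
print("composite score relative to reference: %.2f" % score)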
Here are some interesting new results on how ranking by HPCG changes the ordering
of the top systems on the HPL list:
https://twitter.com/a_z_e_t/status/482062498477383680/photo/1
O-P
--
Olli-Pekka Lehto
Development Manager, Computing Platforms
CSC - IT Center for Science Ltd.
E-Mail: olli-pekka.lehto at csc.fi // Tel: +358 50 381 8604 // skype: oplehto // twitter: @ople