[Beowulf] Broadwell HPL performance
Joe Landman
landman at scalableinformatics.com
Thu Apr 21 07:05:22 PDT 2016
On 04/21/2016 08:56 AM, Douglas Eadline wrote:
>> On 20/04/16 16:52, John Hearns wrote:
>>
[...]
>>
>> Basically I don't trust these numbers. I assume the rest of the
>> data are equally wrong. Being an old-school
>> kind of dude, a link to the raw output is always nice,
>> and some run-time data like HT (on/off), the number of threads,
>> etc. is helpful.
Not specific to HPL, but in general we find that people don't release
all the information around their tests, so the results aren't quite
replicable. Then there are the tests that are outliers for one reason or
another, and are not repeatable even on the same rig, yet get used as the
"actual" result.
Our operating theory is that if you report something, it ought to be the
same as what a user would measure if they ran the test themselves.
>> Others from Boston in the UK:
>>
>> https://www.boston.co.uk/blog/2016/04/06/intel-xeon-e5-2600-v4-codename-broadwell-launch-and-preliminary-bench.aspx
> They report 859 GFLOPS for the E5-2650, again above peak, but they
> seem to state that HT is on. How many threads do they use for the test?
>
> I assumed that enabling HT hurts HPL numbers (at least in my MPI tests it
> does). Is it possible that for these tests HT helps performance (a bit),
> but in the case of the Dell blog, including the HT "cores" means doubling
> the peak number, which would make the result look bad?
>
> Sigh.
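For a rough sanity check (assuming the part in question is the 12-core
E5-2650 v4 at its 2.2 GHz base clock, with 16 DP FLOPs/cycle/core from
the two AVX2 FMA units):

    2 sockets x 12 cores x 2.2 GHz x 16 FLOPs/cycle = 844.8 GFLOPS

so 859 GFLOPS would indeed be above the nominal peak, and the lower AVX
base clock only widens the gap. If you instead counted the 48 HT threads
as cores, the "peak" would double to roughly 1690 GFLOPS and the very
same measurement would suddenly look like ~50% efficiency.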
HT is a mixed bag for computational workloads.
The bigger issue is, as Doug notes, that if you don't quite understand
the platform details and what/how you are measuring, your results will be
difficult (at best) to interpret correctly. It's easy to mess up both
performance measurements and peak theoretical numbers. It helps to
show all the inputs behind your assumptions, so that others can check them
as well. Showing raw data is even better, though a summarized data set
(with a detailed description of how you summarized it) is also helpful.
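As a sketch of what I mean (hypothetical, not our actual harness): record
the platform state next to the raw result, so anyone reading the number can
see what was actually measured. Something like:

#!/usr/bin/env python
# Sketch only: capture platform details alongside a benchmark result so
# the run is describable and checkable by others. Filenames are arbitrary.
import datetime, json, platform, subprocess

def sh(cmd):
    # Run a shell command and return its stripped stdout.
    return subprocess.check_output(cmd, shell=True).decode().strip()

record = {
    "timestamp":   datetime.datetime.utcnow().isoformat(),
    "host":        platform.node(),
    "kernel":      platform.release(),
    "lscpu":       sh("lscpu"),  # sockets, cores, threads/core (HT), model, flags
    "governor":    sh("cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor || true"),
    "omp_threads": sh("echo ${OMP_NUM_THREADS:-unset}"),
    # plus: BLAS library/version, MPI ranks and mapping, the full HPL.dat,
    # and the raw HPL output itself
}

with open("run_metadata.json", "w") as f:
    json.dump(record, f, indent=2)

Obviously incomplete, but the point is that the raw inputs travel with
the number.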
For marketing docs, this is rarely done. For the engineering white papers
that feed into them? It should be.
Real benchmarking done right is actually quite hard. Discerning useful
information from these efforts is a challenge.
--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics, Inc.
e: landman at scalableinformatics.com
w: http://scalableinformatics.com
t: @scalableinfo
p: +1 734 786 8423 x121
c: +1 734 612 4615