[Beowulf] AMD 6100 vs Intel 5600
Mark Hahn
hahn at mcmaster.ca
Fri Apr 2 11:06:28 PDT 2010
>> SPEC self-limits its relevance by refusing to recognize that it should
>> be open-source. being open-hostile means that it has a very limited
>> number of data points,
>
> Yup, only 9,335 submissions indexed on this page:
> http://www.spec.org/cpu2006/results/cpu2006.html
I think SPEC is worthwhile; I just don't understand why the organization
has persisted in limiting its own accessibility and relevance.
that number is also a bit disingenuous, since it lumps together the
int/fp and rate versions. further, many of the results are for systems
that differ only in packaging (who would guess that a Dell 1950 performs
the same as a 2950 when configured identically!)
actually, the latter is one of SPEC's uses: allowing a vendor
to confirm in a public way that a particular model is not broken.
it's also nice to see a particular model/config with a range
of different CPUs. and it's sometimes possible to compare across
vendors, as well. these are all wonderful, and I value SPEC
for them. what I don't understand is why it helps SPEC or consumers
of SPEC to keep the rest of the world from running the tests.
>> very minimalistic UI (let alone data mining tools),
>
> It is a limited text-based UI -- that runs on Linux, Windows, and
> proprietary Unixes. Portability was/is a major goal of SPEC.
sure, everyone understands portability. but wider availability
of the source (and the vast increase in data that would result)
would make it far more interesting to do real data mining.
> The search form:
> http://www.spec.org/cgi-bin/osgresults?conf=cpu2006&op=form
> is useful for data mining.
I use it, but let's not kid ourselves - it's not data mining by any
meaningful definition. and I definitely mean no slight to whoever
put together the interface - it's nice given its mandate.
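to illustrate what I'd call real data mining: with every submission in
one flat file, a few lines of python could answer questions the form
can't. a rough sketch (the file name and column names here are
hypothetical - SPEC publishes no such dump, which is rather my point):

    import csv
    import statistics
    from collections import defaultdict

    # group per-benchmark ratios by CPU model; "cpu2006_results.csv"
    # and its columns (cpu, benchmark, ratio) are invented for this sketch
    by_cpu = defaultdict(lambda: defaultdict(list))
    with open("cpu2006_results.csv") as f:
        for row in csv.DictReader(f):
            by_cpu[row["cpu"]][row["benchmark"]].append(float(row["ratio"]))

    # for each CPU, find the benchmark with the widest relative spread
    # across submissions - large spread often points at compiler/flag games
    for cpu, benches in by_cpu.items():
        spread = {b: statistics.stdev(r) / statistics.mean(r)
                  for b, r in benches.items() if len(r) > 1}
        if spread:
            worst = max(spread, key=spread.get)
            print(cpu, "most variable benchmark:", worst,
                  "cv =", round(spread[worst], 2))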
>> and perhaps most importantly, slow adaptation to changes in how
>> machines
>> are used (memory footprint, etc).
>
> True. But SPEC MPI2007 v2.0 is a 2.4 GB package of software [larger
> datasets, and a 128 GB RAM / 64 core minimum to run the Large suite;
> the Medium suite (16 GB, ~8 cores minimum) is still part of 2.0].
I meant SPECCPU, of course. actually, I'd like to ask you how to think
about SPECMPI results. I spent some time staring at them just now, and
I'm not sure what conclusions to draw.
for instance, with SPECCPU, one of the first things you have to do is
trim the results:
http://www.spec.org/cpu2006/results/res2008q2/cpu2006-20080328-03888.html
that cactusADM result is wildly out of line with the rest of the suite
and is not informative.
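by "trim" I mean mechanically flagging ratios like that one, which sit
far outside the rest of the suite. a minimal sketch in python (the
ratio values below are invented purely for illustration):

    import statistics

    # per-benchmark ratios from a single hypothetical SPECfp submission
    ratios = {"bwaves": 14.2, "gamess": 12.8, "milc": 13.5,
              "cactusADM": 92.0, "leslie3d": 13.9, "namd": 12.1}

    med = statistics.median(ratios.values())
    # flag anything more than 3x away from the median, either direction
    suspect = {b: r for b, r in ratios.items()
               if r > 3 * med or r < med / 3}
    print("suspect:", suspect)   # -> {'cactusADM': 92.0}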
also, among the 187 SPECMPI results there are only a handful of vendors.
given that there are hundreds of HPC centers that would love to be able
to profile their clusters against other clusters, don't you think that
being open-source would fundamentally change the value of the benchmark?
> Like HPC Challenge, and NAS Parallel, it does not provide a single number
> as a metric of performance. There are always compromises and gnashing of
> teeth in coming up with a formula for that single number, but
> SPEC and the Linpack/Top500 maintainers have found that people like it.
I guess it's a question of what your goals are. the single scalar result
is good for the marketing folk (and I would claim that SPEC is pretty
much driven by them). for customers, I don't think the single scalar is
much used or wanted, since it's too hard to tell what it means. with
top500, it's perfectly clear what the number means: raw in-cache flops
with a minor adjustment for interconnect. yes, it's entirely possible
to extract the raw component-level data from SPEC and produce your own
(trimmed, perhaps more discipline-focused) metric - sketch below. but
how many people do it? I did it for our last major hardware refresh ~5
years ago, but if it were possible to have community involvement
(variety, eyes, cross-fertilization) in SPEC, the benchmarks could be
an entirely different kind of garden.
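for what it's worth, rolling your own metric really is a few lines once
you have the per-benchmark ratios: SPEC's headline number is just their
geometric mean, so a trimmed variant costs nothing extra (reusing the
invented ratios from the sketch above):

    import math

    def geomean(xs):
        xs = list(xs)
        return math.exp(sum(math.log(x) for x in xs) / len(xs))

    ratios = {"bwaves": 14.2, "gamess": 12.8, "milc": 13.5,
              "cactusADM": 92.0, "leslie3d": 13.9, "namd": 12.1}

    print("as published:", round(geomean(ratios.values()), 1))  # ~18.3
    # drop the uninformative outlier before aggregating
    trimmed = [r for b, r in ratios.items() if b != "cactusADM"]
    print("trimmed:", round(geomean(trimmed), 1))               # ~13.3

on these made-up numbers, one pathological benchmark inflates the
headline by more than a third - which is exactly why trimming matters.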