[Beowulf] 512-node Myrinet cluster Challenges

Kevin Ball kball at pathscale.com
Wed May 3 14:58:43 PDT 2006


Hi Igor,

On Wed, 2006-05-03 at 12:19, Kozin, I (Igor) wrote:
> Hello Kevin,
> interesting that you said that.
> We are in the process of developing a database for 
> application benchmark results because we produce quite a bit of data
> and only a small fraction goes into our reports.
> The database is for our internal use to begin with
> but we also plan to make it available online
> as soon as we are happy with it.

This would be very useful.

> 
> At the moment we run all the benchmarks ourselves.
> But then we are obviously limited in the amount of
> work we can do and the number of benchmarks we can cover.
> We are also considering what would happen if we started accepting results 
> from vendors (or from anybody, for that matter).
> I am talking here about well established scientific codes
> (as we are serving the UK academia) and well defined
> test cases. (We would not mind ISV codes too but there
> are usually licensing issues.)

  It certainly broadens your scope if you allow vendors to submit
results.  One would need to be careful to define fairly strictly what
must be the same (benchmark input, code configuration options) and what
can be different (CPU, compilers, interconnect, etc.).
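
  For what it's worth, that split could be made explicit in whatever
record format the database ends up accepting.  Below is a minimal
sketch (in Python, with purely hypothetical field names -- not anything
your database actually uses) of one way to separate the fixed part of a
submission from the variable part:

# Hypothetical sketch only -- illustrates the fixed-vs-variable split above.
from dataclasses import dataclass

@dataclass(frozen=True)
class FixedPart:
    """Everything that must be identical across all submissions."""
    application: str     # e.g. the code name
    version: str         # exact code version
    test_case: str       # well-defined benchmark input
    build_options: str   # code configuration options

@dataclass
class Submission:
    """One vendor/site result; these fields may differ freely."""
    fixed: FixedPart
    cpu: str
    compiler: str
    interconnect: str
    node_count: int
    wall_time_seconds: float
    submitted_by: str = "unknown"

if __name__ == "__main__":
    base = FixedPart("SomeCode", "1.0", "case_A", "default")
    result = Submission(base, "Opteron 2.4 GHz", "PathScale 2.4",
                        "InfiniPath", 64, 1234.5)
    print(result)

  In a setup like this, the fixed part would have to match a reference
entry exactly before a submission is accepted, while everything else is
simply recorded alongside the timing.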

  As far as ISV codes are concerned... they often have their own
individual websites where benchmarks are published (Fluent publishes on
their site, MPP-dyna results are published at TopCrunch) so it might be
sufficient (and would definitely be useful) to simply have pointers to
all of those results.

> Is there indeed sufficient interest in such a resource?

  This is difficult to say.  PathScale (well, now QLogic, since they
acquired us) is interested enough in application performance that we
have gone out of our way to measure performance on a large number of
applications... but very often what we have reported on has been limited
more by a dearth of comparative results than by a lack of applications
to run.  It is all well and good to report scaling or performance
results on an application, but if no one else has ever done so, those
results don't mean much.  Scaling well on an embarrassingly parallel
code is of little interest.

  Given how little application performance data some of the other
compiler and interconnect vendors publish, I don't know whether there is
interest on their part.  I think system vendors tend to be more
interested.
> 
> Should the data entry be done by us (say, based on the
> output file and the technical description provided)
> or left to be entered through the web interface?
> Somehow I feel more inclined towards the former
> although it would mean more work for us.
> But then again it all depends on whether anybody will bother at all...

The former certainly adds a level of control and verification.  Other
benchmarking institutions have run the whole gamut, from SPEC (multiple-
week review periods) to HPCC (you submit it and it appears).

-Kevin
> 
> 
> I. Kozin  (i.kozin at dl.ac.uk)
> CCLRC Daresbury Laboratory
> tel: 01925 603308
> http://www.cse.clrc.ac.uk/disco
> 
> > 
> >   It would be wonderful if someone (preferably 
> > CPU/Interconnect neutral)
> > would sign up to set up and maintain a similar page for application
> > benchmark results.  I personally have spent many hours trying to scour
> > the web for such results, trying to get a feel for how PathScale
> > products (both our interconnect and our compilers) are doing 
> > relative to
> > the competition.  This type of information is not only useful for
> > vendors, but would be incredibly useful for customers.
> > 
> >   In the absence of such a central repository, we post many 
> > application
> > scaling and performance results on our website in the form of a white
> > paper.  We would be very happy if other vendors did the same, 
> > but better
> > still would be if an independent body had a place for such results.
> > 
> 



