[Beowulf] New HPCC results, and an MX question
daniel.kidger at quadrics.com
Thu Aug 11 07:10:53 PDT 2005
Folks,
Just to note - it was I who did the runs posted on the HPCC website that gave Quadrics a figure of 11.4568 usec for the random ring latency.
This figure is bogus.
It was caused, IMHO, by a defect in the HPCC source code (now fixed in a later release of HPCC). The HPCC source at the time used MPI_Wtick() to decide how long the test should run in wall-clock time, and hence how many iterations of the timed loop to do. Quadrics MPI is unusual in that it uses the hardware clock on the Elan chip to get very high-resolution timings, so MPI_Wtick() was much smaller than the benchmark writers expected. This led to very few (<10, IIRC) passes of the timed loop, and hence the startup costs were not amortised away.
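
To make that concrete, here is a minimal sketch of the calibration pattern as I understand it - hypothetical code, not the actual HPCC source; ping_pong_once() is a stand-in for one pass of the communication loop, and TICKS_WANTED is an assumed constant:

    /* Hypothetical sketch of the flawed calibration, assuming the
     * timed loop runs until the elapsed time exceeds a fixed number
     * of clock ticks. Not the actual HPCC source. */
    #include <mpi.h>
    #include <stdio.h>

    /* Stand-in for one pass of the timed communication loop
     * (e.g. an MPI_Sendrecv around the ring); body omitted. */
    static void ping_pong_once(void) { }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* Target wall-clock duration derived from the timer
         * resolution reported by MPI_Wtick(). */
        const double TICKS_WANTED = 100.0;   /* assumed constant */
        double target = TICKS_WANTED * MPI_Wtick();

        double t0 = MPI_Wtime();
        long iters = 0;
        while (MPI_Wtime() - t0 < target) {
            ping_pong_once();
            iters++;
        }
        double elapsed = MPI_Wtime() - t0;

        /* With a ~1e-6 s tick the target is ~1e-4 s and many passes
         * run, amortising per-call startup cost. With the Elan
         * hardware clock MPI_Wtick() is orders of magnitude smaller,
         * so the loop exits after only a handful of passes and the
         * startup cost dominates the reported latency. */
        if (iters > 0)
            printf("%ld passes, %g usec/pass\n",
                   iters, 1e6 * elapsed / iters);

        MPI_Finalize();
        return 0;
    }

The point is that a finer timer resolution shrinks the target duration, and with it the number of timed passes, rather than improving the measurement.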
I will re-run a newer version of this benchmark on QsNetII to get the 'proper' figure and post it if folks are interested (or indeed, any of you can ask sales at quadrics.com for access to such a cluster and do your own independent testing).
Daniel.
> -----Original Message-----
> From: Greg Lindahl [mailto:lindahl at pathscale.com]
> Sent: 20 July 2005 21:59
> To: beowulf at beowulf.org
> Subject: Re: [Beowulf] New HPCC results, and an MX question
>
>
> > >To give you an example, look at the Quadrics reported numbers for
> > >random ring latency of 11.4568 usec and average ping-pong of 1.552
> > >usec. This is on a 2-cpu node (I think). I'd bet that most of this
> > >difference has nothing to do with machine size. But I'd be happy
> > >to be proven wrong.
> >
> > I would think 1.5 is shared memory in this case
>
> Patrick,
>
> That's too high, if you look at the "minimum ping pong" of 0.937 usec,
> that is their shared memory number. (The Quadrics guys are a lot
> smarter than 1.5!)
>
> > I prefer benchmarking real codes, and we will publish that, but 10G
> > is taking most of my time these days (got to get something for you
> > to compare against).
>
> I'll look forward to it. We've published several application
> benchmarks for you to compare to; a whitepaper is linked at the bottom
> of: http://pathscale.com/infinipath-perf.html
>
> -- greg