Anyone have information on latest LSU beowulf?
Craig Tierney
ctierney at hpti.com
Thu Oct 10 08:54:00 PDT 2002
On Wed, Oct 09, 2002 at 12:39:03PM -0400, Patrick Geoffray wrote:
> On Wed, 2002-10-09 at 12:07, Craig Tierney wrote:
>
> > This says that NB=40 is good for the PIII, which has a larger
> > L1 data cache than a P4 (16k vs. 8k). NB should be a multiple
> > of 32 for the P4. I would like to try it out on a PIII; I would
> > think that 44 is a better value based on cache size. I tried
> > all these tricks on an Alpha with a 16k L1 cache and found 88
> > (44*2) was best.
>
> Which value is used by ATLAS? Stick with it. It may be 40 or 32 or
> whatever, but it will be the granularity of DGEMM used in HPL.
> If 32 is the block size used by ATLAS, try to run with NB as 32, 64, 96
> and 128.
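(For what it's worth, the 44 guess above is just cache arithmetic: a 44x44
block of doubles is 44*44*8 = 15,488 bytes, which barely fits in a 16k L1
data cache, while 48*48*8 = 18,432 bytes spills out of it.)

Sweeping NB the way Patrick suggests only takes one HPL.dat. A rough sketch
of the relevant lines, assuming the standard HPL input file layout; the N,
P, Q values and the NB list here are simply the ones from the runs below:

  1                      # of problems sizes (N)
  125000                 Ns
  6                      # of NBs
  64 80 96 128 160 192   NBs
  0                      PMAP process mapping (0=Row-,1=Column-major)
  1                      # of process grids (P x Q)
  20                     Ps
  25                     Qs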
It seems that I was wrong about the NB size. I thought I had tested it, but
not on 500 processors. Here are my results from two runs on 250 dual-Xeon
2.2 GHz nodes; the two runs were on different systems.
In my ATLAS summary, NB=48 is being used.
Run 1 (columns: T/V, N, NB, P, Q, Time (s), Gflops):
W01R2L6   125000   160   20   25   1329.60   9.793e+02
W01R2L6   125000    80   20   25   1343.96   9.689e+02
W01R2L6   125000    96   20   25   1372.81   9.485e+02
W01R2L6   125000   192   20   25   1412.41   9.219e+02
W01R2L6   125000    64   20   25   1415.00   9.202e+02
W01R2L6   125000   128   20   25   1575.68   8.264e+02
Run 2 (same columns):
W01R2L6   125000   160   20   25   1345.82   9.675e+02
W01R2L6   125000    80   20   25   1387.60   9.384e+02
W01R2L6   125000    96   20   25   1415.53   9.199e+02
W01R2L6   125000    64   20   25   1422.12   9.156e+02
W01R2L6   125000   192   20   25   1442.42   9.027e+02
W01R2L6   125000   128   20   25   1596.93   8.154e+02
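A quick way to rank result lines like these by the Gflops column (a
throwaway Python sketch, assuming only the standard seven HPL output
fields shown above: T/V, N, NB, P, Q, Time, Gflops):

#!/usr/bin/env python
# Rank HPL result lines by Gflops (last field), highest first.
# Expects the standard seven-field HPL result rows on stdin.
import sys

results = []
for line in sys.stdin:
    fields = line.split()
    if len(fields) != 7:
        continue
    try:
        nb = int(fields[2])
        secs = float(fields[5])
        gflops = float(fields[6])
    except ValueError:
        continue
    results.append((gflops, nb, secs))

results.sort(reverse=True)
for gflops, nb, secs in results:
    print("NB=%-4d  %8.2f s  %7.1f Gflops" % (nb, secs, gflops))

Fed the six lines from either run, it puts NB=160 first and NB=80 second.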
The NB=192 case failed to give the correct result on both runs.
NB=80 and NB=160 give the best results on this problem size, so I will
continue to test with those values.
Craig
>
> Patrick
> --
> ----------------------------------------------------------
> | Patrick Geoffray, Ph.D. patrick at myri.com
> | Myricom, Inc. http://www.myri.com
> | Cell: 865-389-8852 685 Emory Valley Rd (B)
> | Phone: 626-821-5555 Oak Ridge, TN 37830
> ----------------------------------------------------------
--
Craig Tierney (ctierney at hpti.com)