Anyone have information on latest LSU beowulf?
Daniel Kidger
Daniel.Kidger at quadrics.com
Wed Oct 9 09:53:06 PDT 2002
-----Original Message-----
From: Craig Tierney [mailto:ctierney at hpti.com]
Sent: 09 October 2002 17:07
To: Patrick Geoffray
Cc: Daniel Kidger; 'Rocky McGaugh'; Beowulf mailinglist
Subject: Re: Anyone have information on latest LSU beowulf?
> Hi Craig,
>
> On Tue, 2002-10-08 at 12:54, Craig Tierney wrote:
> > > What value of NB did they settle on? (80 and 160 seem common choices)
> > > Any other non-default values in HPL.dat?
> >
> > Why are 80 and 160 common choices? I do know that they used 160
> > for their run. I also retested my setup at 160 and it is much
> > slower than 64. I was told by someone at UTK that NB should be
> > sized so that an NB x NB block of doubles fits in the L1 cache,
> > and that double that value is good. So NB = sqrt(8 KB * 1024 / 8
> > bytes per double) = sqrt(1024) = 32 for a P4 Xeon. I tried 64 and
> > that has been the best for a single-node run.
>
> The block size (NB) should be a multiple of the optimal block size found
> by ATLAS. Look for this value in the DGEMM results in SUMMARY.LOG. This
> value is usually 40. Any multiple of this ATLAS block size is fine.
> If NB is small, you will have a lot of communication but good load
> balancing. If NB is large, you have less communication but the grain is
> coarser. 160 (4*40) is a good trade-off for a Myrinet cluster.
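For what it's worth, here is a quick back-of-the-envelope sketch of the two
rules of thumb quoted above. The 8 KB L1 figure and the nb=40 ATLAS block
size are just the example numbers from the text above, not measurements from
my own nodes:

/* nb_sketch.c - illustrative only: reproduces the two NB rules of thumb
 * quoted above (L1-sized block, and multiples of the ATLAS block size).
 * The 8 KB L1 and nb=40 values are assumptions taken from the discussion. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double l1_bytes = 8.0 * 1024.0; /* P4 Xeon L1 data cache (Craig's figure)    */
    const int atlas_nb    = 40;           /* "usually 40" ATLAS DGEMM block (Patrick)  */

    /* Rule 1: largest NB such that an NB x NB block of doubles fits in L1 */
    int nb_l1 = (int)sqrt(l1_bytes / sizeof(double));  /* sqrt(1024) = 32 */
    printf("L1-based NB: %d (and double that: %d)\n", nb_l1, 2 * nb_l1);

    /* Rule 2: multiples of the ATLAS block size */
    for (int m = 1; m <= 4; m++)
        printf("ATLAS-multiple NB: %d\n", m * atlas_nb); /* 40, 80, 120, 160 */

    return 0;
}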
So if in ATLAS's SUMMARY.LOG I get
<cut>
The best matmul kernel was ATL_dmm_sse2_80.c, written by Peter Soendergaard
This gave performance of 3623.65MFLOPS (200.9227777752460f apparent peak)
mmNN : ma=0, lat=3, nb=48, mu=4, nu=1 ku=48, ff=1, if=5, nf=1
<cut>
does this mean that I have a 48x48 or an 80x80 DGEMM kernel?
(For this node, using xhpl+atlas, 80 and 160 give better performance than
48 or 96.)
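(Aside, in case anyone wants to reproduce the comparison: as far as I know
the candidate block sizes can all be swept in one run by listing them on the
NBs line of HPL.dat, roughly like the fragment below; the problem size here
is just a placeholder, not one of our actual runs.)

1            # of problems sizes (N)
20000        Ns      (placeholder - pick whatever fits the node's memory)
4            # of NBs
48 80 96 160 NBs     (the block sizes compared above)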
Yours,
Daniel.
--------------------------------------------------------------
Dr. Dan Kidger, Quadrics Ltd. daniel.kidger at quadrics.com
One Bridewell St., Bristol, BS1 2AA, UK 0117 915 5505
----------------------- www.quadrics.com --------------------