xdlutime results

Horatio B. Bogbindero wyy at cersa.admu.edu.ph
Wed Sep 13 19:53:44 PDT 2000


Pardon the question, but what value should I set it to? How do I know
what the appropriate value is? Thanks.

On Thu, 14 Sep 2000, Yoon Jae Ho wrote:

> In the lutimer.f file
> 
>       PROGRAM DLUTIMER
> *
> *     Simple Timing Program for ScaLAPACK routine for LU factorization
> *     and solver
> *
> *     The program must be driven by a short data file. The name of
> *     the data file is 'LU.dat'. An annotated example of a data
> *     file can be obtained by deleting the first 6 characters from
> *     the following 5 lines: (The number in the first line is the
> *     number of different problems the user wants to test. If it
> *     is 'n', the user should input exactly 'n' numbers in each line
> *     after. Also, this program will use the first column of N, NB,
> *     P, Q to do the first test, second column of N, NB, P, Q to do
> *     the second test, etc.)
> *     2                 number of problem sizes
> *     500 600           values of N (N x N square matrix)
> *     64 64             values of NB
> *     2 2               values of P
> *     2 2               values of Q
> *
> *     .. Parameters ..
>       INTEGER            CSRC, DBLESZ, INTGSZ, TOTMEM, MEMSIZ, NBRHS,
>      $                   NOUT, NRHS, NTESTS, RSRC, CSRC_, DLEN_, LLD_,
>      $                   M_, MB_, N_, NB_, RSRC_
>       PARAMETER          ( CSRC = 0, DBLESZ = 8, INTGSZ = 4,
>      $                   TOTMEM = 40000000, MEMSIZ = TOTMEM / DBLESZ,
> 
> 
> Please change the above TOTMEM = 40000000 to a bigger figure.
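> 
> A rough way to pick the figure (just a sketch, not a definitive rule):
> each process needs room in its static work array for its local piece
> of the N x N double precision matrix plus some workspace, and the
> tester itself prints the exact minimum it wants ("need TOTMEM of at
> least ...") when the current value is too small. Something along
> these lines gives a starting estimate, keeping the result below the
> physical memory per node; the N, P, Q values here are only the ones
> discussed in this thread:
> 
>       PROGRAM SIZMEM
> *     Hypothetical helper, not part of ScaLAPACK: back-of-the-envelope
> *     per-process TOTMEM estimate: local matrix bytes plus 20% slack.
>       INTEGER            N, P, Q, DBLESZ
>       PARAMETER          ( DBLESZ = 8 )
>       DOUBLE PRECISION   BYTES
>       N = 20000
>       P = 2
>       Q = 4
>       BYTES = 1.2D0 * DBLE( N ) * DBLE( N ) * DBLESZ / DBLE( P * Q )
>       WRITE( *, * ) 'Set TOTMEM to at least (bytes): ', BYTES
>       END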
> 
> Have a nice day.
> 
> from Yoon Jae Ho
> Seoul, Korea
> 
> ----- Original Message ----- 
> From: Camm Maguire <camm at enhanced.com>
> To: Fredrik Augustsson <hamlet at cs.umu.se>
> Cc: <pgeoffra at cs.utk.edu>; <beowulf at beowulf.org>
> Sent: Wednesday, September 13, 2000 11:59 PM
> Subject: Re: xdlutime results
> 
> 
> > Greetings!  The ScaLAPACK testers have statically allocated arrays.
> > TOTMEM is a parameter, and it is set to a small value by default, if
> > memory serves.  Just edit and recompile.  The Debian scalapack-test
> > package has this pre-adjusted to something more reasonable.
> > 
> > Take care,
> > 
> > Fredrik Augustsson <hamlet at cs.umu.se> writes:
> > 
> > > Hi!
> > > I've tried to run xdlutime to get an estimate of how well our
> > > cluster performs, but I can only test small matrices. I have 8 dual
> > > PIII nodes with 512 MB of memory each, so I should at least be able
> > > to run tests on 20000x20000 matrices, or am I wrong here?
> > > 
> > > Output ...
> > > 
> > > dumburk [~/pfs]$ mpirun -np 8 -npn 1 xdlutime
> > > 
> > > Simple Timer for ScaLAPACK routine PDGESV
> > > Number of processors used:   8
> > > 
> > > TIME     N  NB   P   Q  LU Time   Sol Time  MFLOP/S Residual  CHECK
> > > ---- ----- --- --- --- --------- --------- -------- -------- -------
> > > Unable to perform LU-solve: need TOTMEM of at least  325494408
> > > Bad MEMORY parameters: going to next test case.
> > > 
> > > 
> > > + Fredrik 
> > > 
> > > 
> > > 
> > > 
> > > On Tue, Sep 12, 2000 at 10:56:38PM -0400, Patrick GEOFFRAY wrote:
> > > > Putchong Uthayopas wrote:
> > > > 
> > > > > Below are our results. The machines are 8 Athlon 550 MHz and one
> > > > > 1 GHz Athlon, with 512 MB memory each, and Myrinet.
> > > > 
> > > > > WALL 6000 9 1 9 116.80 0.41 1229.04 0.001097 PASSED
> > > > > 
> > > > > WALL 6000 9 9 1 200.79 3.65 704.66 0.001926 PASSED
> > > > 
> > > > Hi,
> > > > 
> > > > It seems that you are far from what you can get from your cluster!
> > > > * First, 9 nodes is not a good idea; the (1*9) and (9*1) grids are
> > > > bad. You should use only 8 nodes and try the (2*4) and (4*2) grids,
> > > > which are much better.
> > > > * Then, 6000 is not a big matrix. If you have 512 MB per node, you
> > > > should go up to 20000. It will run longer but you will be closer to
> > > > the peak (a quick back-of-the-envelope follows after this list).
> > > > * Which BLAS do you use? ATLAS gives a DGEMM peak of about 600
> > > > MFlops per Athlon, so you can hope for about 4 GFlops on your cluster.
> > > > * Which MPI do you use? If it's MPICH-GM, do you use the
> > > > -gm-can-register-memory flag?
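> > > > 
> > > > (Back-of-the-envelope for the matrix size, using only figures from
> > > > this thread: a 20000 x 20000 double precision matrix takes
> > > > 20000 * 20000 * 8 bytes, about 3.2 GB in total, i.e. roughly 400 MB
> > > > per node when spread over 8 nodes, which still fits in 512 MB.)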
> > > > 
> > > > I strongly advise you to use the new High Performance Linpack
> > > > (HPL) benchmark from Antoine Petitet
> > > > (http://www.netlib.org/benchmark/index.html); it's faster than the
> > > > old Linpack and much easier to install.
> > > > 
> > > > Hope it helps.
> > > > Regards.
> > > > 
> > > > Patrick Geoffray
> > > > ---
> > > > Aerospatiale Matra - Sycomore
> > > > Universite Lyon I - RESAM
> > > > http://lhpca.univ-lyon1.fr
> > > > 
> > 
> > -- 
> > Camm Maguire      camm at enhanced.com
> > ==========================================================================
> > "The earth is but one country, and mankind its citizens."  --  Baha'u'llah
> > 
> 
> _______________________________________________
> Beowulf mailing list
> Beowulf at beowulf.org
> http://www.beowulf.org/mailman/listinfo/beowulf
> 

 
---------------------
william.s.yu at ieee.org
 
I bought some used paint. It was in the shape of a house.
		-- Steven Wright
 




