Version of gmake for clusters...

Josip Loncaric josip at icase.edu
Wed Apr 11 12:59:15 PDT 2001


Greg Lindahl wrote:
> 
> On Mon, Apr 09, 2001 at 02:14:34PM +0100, J.W.Armstrong wrote:
> 
> >   Can anyone point me in the direction of a make program which will
> > work with slave nodes on a Scyld cluster? I.e., compile different
> > .c/.f files from a Makefile on different nodes of a cluster.
> 
> Are you sure that your compiles are cpu bound? I've seen numerous
> people spend time getting a parallel make going, only to discover that
> it slows things down. There are some cases (C++ comes to mind) that
> are CPU bound, but then your fastest make (and smallest object code)
> is often gotten from catting all the source files together and doing a
> single compile.
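
(As an aside on the single-compile trick Greg describes: in shell terms
it amounts to something like the sketch below, with made-up file names
standing in for a real project.)

    # Sketch of the "cat everything, compile once" trick; file names
    # are placeholders, and it only works if the sources do not clash
    # when concatenated (duplicate statics, includes, and so on).
    cat main.c solver.c io.c > all.c
    gcc -O2 -o prog all.c
    # versus the usual per-file compile and link:
    #   gcc -O2 -c main.c solver.c io.c
    #   gcc -O2 -o prog main.o solver.o io.o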

I'm not sure about pmake, but running 'make -j 2 bzImage modules' on an
SMP Linux machine with lots of RAM cuts kernel compile times virtually
in half.  Using 'make -j 4' helps even more, because the extra jobs can
hide some I/O delays; this gives just about the best result.  However,
on a dual-CPU Linux box the number of make jobs should be limited to 4
or so, because an unbounded 'make -j' (with no job count) does not do
as well.
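
If you want to reproduce this comparison on your own machine, a rough
sketch of the kind of runs I mean is below (the kernel source path and
job counts are only placeholders; adjust them for your own box):

    # Minimal sketch, assuming a 2.4-era kernel tree in /usr/src/linux
    # and that 'make dep' etc. has already been run once.
    cd /usr/src/linux
    make clean
    time make -j 2 bzImage modules   # roughly one job per CPU
    make clean
    time make -j 4 bzImage modules   # extra jobs hide some I/O stalls
    make clean
    time make -j bzImage modules     # unbounded jobs; usually worse here

Comparing the elapsed ("real") times reported by 'time' shows where the
sweet spot is; on a dual-CPU box with enough RAM the -j 4 run comes out
ahead.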

In general, gcc under Linux is very fast at compiling large codes.  Our
Sun compilers are much slower, which can be a real pain on big projects
(just try compiling LAPACK and see for yourself).  Some commercial
codes are even larger: a friend told me his company did nightly builds
of their code during development, and each build took up to 8 hours to
complete.  In other words, compiles *can* be CPU bound.
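
Before going to the trouble of a parallel or distributed make, it is
worth checking whether a given build really is CPU bound.  A rough way
to do that (just a sketch; substitute your own build command) is:

    # Rough check, assuming the build is driven by a plain 'make'.
    make clean
    time make > build.log 2>&1   # shell builtin 'time' prints real/user/sys
    # If user+sys is close to the elapsed (real) time, the build is
    # CPU bound and spreading it over more CPUs should pay off.
    # If real is much larger, you are mostly waiting on disk or NFS.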

Sincerely,
Josip

-- 
Dr. Josip Loncaric, Research Fellow               mailto:josip at icase.edu
ICASE, Mail Stop 132C           PGP key at http://www.icase.edu./~josip/
NASA Langley Research Center             mailto:j.loncaric at larc.nasa.gov
Hampton, VA 23681-2199, USA    Tel. +1 757 864-2192  Fax +1 757 864-6134



