[Beowulf] Writing MPICH2 programs

David Mathog mathog at mendel.bio.caltech.edu
Mon Jan 24 14:04:21 PST 2005


>
>A nice demo for a cluster is the parallel version of a raytracer. Google
>for "mpi povray". With the graphics version you can see the blocks which
>the slaves return, which is quite impressive.

Even more impressive (assuming 20 nodes): run 20 jobs sequentially
through the MPI version, and then run 20 single jobs, one per node
(using SGE or MOSIX, for instance), on the compute nodes in parallel.
The last time I tried that with POVray, the total time to complete
the 20 single jobs in parallel was roughly
30% less than that for the 20 MPI-parallel jobs run in order.  Note that
it was important to render to local storage on the compute nodes
(/tmp, so it never actually hit disk there) and then copy the results
back to the final NFS directory.  That moves data in large chunks, and
since the jobs tend not to finish all at the same time, it does a
pretty fair job of keeping the network running efficiently.  In another
test, where each node wrote its results on the fly back to the common
NFS directory, performance wasn't nearly as good: the network went
nuts trying to handle all of the smallish packets.  (That was only
100BaseT; it may be less of a problem on Myrinet or 1000BaseT.)

Regards,

David Mathog
mathog at caltech.edu
Manager, Sequence Analysis Facility, Biology Division, Caltech
