parallel programming paradigm

Horatio B. Bogbindero wyy at cersa.admu.edu.ph
Thu Sep 21 21:07:10 PDT 2000


can i ask for some clarifications? obviously i am confused since i am a
systems/network administrator and not a real computer science guy. hehehe.
btw, i am a physics major and would love to try out some neat physics
projects with the cluster. hehehe.

what do you mean when you say data parallel?

i was assuming that matrix-matrix operations are implemented in
master-slave fashion. i was thinking more along the lines of:

-master gets the request and the matrices
-master chops them into itty-bitty parts
-master sends the parts out to the other nodes
-other nodes compute
-master gathers the itty-bitty parts
-master puts them back together
-master returns the result to the user

my understanding is that this is data parallel in the sense that the data
was chopped into itty-bitty datasets and then farmed out. is this simplistic
understanding of the matter correct?
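to make that concrete, here is a rough sketch of that scatter/compute/gather
flow for a simple matrix-vector multiply in C with MPI. the size N and the
assumption that N divides evenly among the nodes are made up for illustration;
this is not how SCALAPACK or any real library does it. note that rank 0 plays
"master" only for setting up and collecting the data -- every rank, including
rank 0, computes its own block, which is where it starts to look data parallel:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 512                      /* made-up problem size; assumed to divide evenly */

int main(int argc, char **argv)
{
    int rank, nprocs, rows, i, j;
    double *A = NULL, *y = NULL;   /* full matrix and result live on rank 0 only */
    double *x, *Aloc, *yloc;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    rows = N / nprocs;             /* rows of A each node is responsible for */
    x    = malloc(N * sizeof(double));
    Aloc = malloc(rows * N * sizeof(double));
    yloc = malloc(rows * sizeof(double));

    if (rank == 0) {               /* "master gets the request and the matrices" */
        A = malloc(N * N * sizeof(double));
        y = malloc(N * sizeof(double));
        for (i = 0; i < N * N; i++) A[i] = 1.0;
        for (i = 0; i < N; i++)     x[i] = 1.0;
    }

    /* master chops A into row blocks and farms them out; x goes to everyone whole */
    MPI_Scatter(A, rows * N, MPI_DOUBLE, Aloc, rows * N, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    MPI_Bcast(x, N, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    /* every node (rank 0 included) computes its own slice of y = A*x */
    for (i = 0; i < rows; i++) {
        yloc[i] = 0.0;
        for (j = 0; j < N; j++)
            yloc[i] += Aloc[i * N + j] * x[j];
    }

    /* master gathers the pieces and puts the result back together */
    MPI_Gather(yloc, rows, MPI_DOUBLE, y, rows, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("y[0] = %g (expected %d)\n", y[0], N);

    free(x); free(Aloc); free(yloc);
    if (rank == 0) { free(A); free(y); }
    MPI_Finalize();
    return 0;
}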

On Thu, 21 Sep 2000, Tony Skjellum wrote:

> See below.
> -Tony
> 
> Anthony Skjellum, PhD, President (tony at mpi-softtech.com) 
> MPI Software Technology, Inc., Ste. 33, 101 S. Lafayette, Starkville, MS 39759
> +1-(662)320-4300 x15; FAX: +1-(662)320-4301; http://www.mpi-softtech.com
> "Best-of-breed Software for Beowulf and Easy-to-Own Commercial Clusters."
> 
> On Fri, 22 Sep 2000, Horatio B. Bogbindero wrote:
> 
> > 
> > i would just like to ask some questions that have been bugging my mind:
> > 
> > -is the master-slave model for parallel programming the most frequently
> > used model?
> It is hard to say, but these classes clearly exist:
>   - pure data parallel (like SCALAPACK, LINPACK, many parallel algorithms)
>   - master-slave model where slaves do DATA PARALLEL computations
>   - master-slave model where slaves do independent task-parallel work (or even
>     work in smaller groups on distinct tasks; each subgroup could be
>     data parallel, for instance)
>   - pure task parallel - lots of different stuff going on with lots of
>     heterogeneous communication
> 
> > -is SCALAPACK implemented as master-slave?
> No, it is data parallel
> > -is PETSc implemented as master-slave?
> Most of the linear algebra in PETSc would be data parallel
> > -do benchmark tests like LINPACK and HPL test for node-to-node or pairwise
> > bandwidth on all the nodes?
> LINPACK (and the related HPL) test two things:
>   - reductions of single real numbers
>   - broadcasts of submatrices (single rows/columns and block rows/columns)
> They do not explicitly test node-to-node or pairwise bandwidth, nor overlap
> of communication and computation.
> 
> > 
> > -what is the most commonly used non-master-slave model that you use?
> Data parallel is what most parallel libraries and many parallel programs
> use: matrix-vector, matrix-matrix, LU, QR, data decomposition type
> algorithms...
>  > 
> > thanks for your time. i am exploring the possibilities for different types
> > of parallel programming paradigms.
> > 
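on the LINPACK/HPL point quoted above: the two communication patterns
mentioned boil down to a scalar reduction and a broadcast of a block
row/column. here is a toy MPI sketch of just those two collectives; the block
and matrix sizes are made up, and this is nothing like the real HPL code -- it
only shows the shape of the communication:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, nprocs, i;
    int nb = 64, n = 1024;             /* made-up block size and matrix dimension */
    double local_pivot, global_pivot;
    double *panel;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* (1) reduction of a single real number, e.g. agreeing on a global pivot */
    local_pivot = (double)(rank + 1);  /* stand-in for a local column maximum */
    MPI_Allreduce(&local_pivot, &global_pivot, 1, MPI_DOUBLE, MPI_MAX, MPI_COMM_WORLD);

    /* (2) broadcast of a block column so every node can update its own part */
    panel = malloc((size_t)n * nb * sizeof(double));
    if (rank == 0)
        for (i = 0; i < n * nb; i++) panel[i] = 1.0;
    MPI_Bcast(panel, n * nb, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global pivot across %d nodes = %g, panel[0] = %g\n",
               nprocs, global_pivot, panel[0]);

    free(panel);
    MPI_Finalize();
    return 0;
}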

 
---------------------
william.s.yu at ieee.org
 
"... all the modern inconveniences ..."
		-- Mark Twain
 




