[Beowulf] A start in Parallel Programming?

Joe Landman landman at scalableinformatics.com
Mon Mar 19 17:02:40 PDT 2007


Robert G. Brown wrote:

>> While using this is a great way to write out what you want to do at a
>> higher (more abstract) level, writing out an LU solver in C (a good one
>> that performs well) is quite a bit harder.  And more prone to error than
>> using the above.
> 
> Unless, of course, you use an excellent numerical library for C like the
> GSL, in which case you have a whole range of things you can do:
> 
>   http://www.gnu.org/software/gsl/manual/html_node/LU-Decomposition.html

Which I agree with, to a degree ... to every rule there is an exception ... yadda yadda yadda.

The issue with GSL (last I looked, which was last year) was parallelism.
There was an OpenMP "port" attempt at one time, but no MPI-er-ization
(is there a reasonable word for the process of converting code to MPI?).

Part of this is that the abstraction they use has a number of built-in
dependencies, specifically that the data is only a pointer away from the
data structure or the method using it.  Yeah, you can do things with
"DSM"-like bits, but I am not sure that is the right approach.

That is, you can use the LU solver built into GSL, as long as you accept
its limitations (which was implicit in my point).  Not inapropos (er
... ah ...), but disjointed, because I pointed out that there were
dangerous features without pointing out that those dangerous features
can also be used for much good.  And if you try to abstract away the
dangerous features, sure, you can work at a "higher" level, but you give
something up in the process.
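
To make the GSL option concrete, the serial usage is roughly the
following (a sketch along the lines of the manual page rgb linked above,
with a made-up 3x3 system; note there is no parallelism anywhere in it,
which is the limitation I mean):

#include <stdio.h>
#include <gsl/gsl_linalg.h>

int main(void)
{
    /* small made-up system A x = b */
    double a_data[] = { 4.0,  3.0,  0.0,
                        3.0,  4.0, -1.0,
                        0.0, -1.0,  4.0 };
    double b_data[] = { 24.0, 30.0, -24.0 };

    gsl_matrix_view A = gsl_matrix_view_array(a_data, 3, 3);
    gsl_vector_view b = gsl_vector_view_array(b_data, 3);
    gsl_vector *x = gsl_vector_alloc(3);
    gsl_permutation *p = gsl_permutation_alloc(3);
    int signum;

    /* factor A in place, then back-substitute */
    gsl_linalg_LU_decomp(&A.matrix, p, &signum);
    gsl_linalg_LU_solve(&A.matrix, p, &b.vector, x);

    gsl_vector_fprintf(stdout, x, "%g");

    gsl_permutation_free(p);
    gsl_vector_free(x);
    return 0;
}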

The major difference between the time I posted the original and now is
that the coffee, warm in my hand then, had not yet diffused sufficiently
into my brain to enable anything close to human speech or cogent/coherent
argument ... so I was thinking of (but forgot to write down) the rest of
the point.  I must remember NPBC: No Posting Before Coffee.

> and where this is just the first step of many of them.  Your observation
> isn't incorrect, it is just inapropos.  Indeed, if I had to choose to

Well, I didn't take the time to provide a detailed construction.  The
missing bits were simply that the "dangerous" features also confer great
power.  Conversely, if you try to avoid these "dangerous" features, you
have to jump through painful hoops to emulate that power, and in the
process you lose some features/functionality/performance.  The
"engineering" aspect is figuring out which trade-off you would prefer to
deal with.

> trust something like incomplete gamma function evaluation in octave or
> in the GSL, I'd pick the GSL.  But then, I'm on it's list...

Heh ...

[...]

> Honestly, I suspect that there are two or three things the nay-sayers
> were thinking about when they described C as "unsafe".  First and

Hmmm ... I like C; I hope I am not construed as a nay-sayer ...

[...]

> The latter of these (and half the second) are part of C's great power.

which was my point.

> One of my favorite examples involves ODE solvers.  Most canned ODE
> solvers require that they be passed a VECTOR of ODEs and initial

Heh ... you mean the ODE solvers written in the '60s?

> conditions to return a VECTOR or new values.  Alas, I tend to solve ODEs
> that are e.g. nearest-neighbor coupled on a spatial lattice in 3 (or
> more!) dimensions.
> 
> There are two ways to solve this problem (for say a cube of side N
> sites).  One is to allocate a single linear vector of length N^3 (to
> make the ODE happy) and write your ODE derives routine with a whole lot
> of Evil pointer arithmetic -- that (i*jmax + j)*kmax + k kind of stuff
> so that you can do such transparent code as:

Evil pointer arithmetic.  Hmmm.... me like!

> 
>  U = - coup*vec[(i*jmax + j)*kmax + k]*vec[((i+1)*jmax + j+1)*kmax +
> k+1] + ...;

Now try explaining to students why you are doing this ...
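
At minimum I would bury the index arithmetic in a small helper so the
intent is visible.  Just a sketch -- the names idx() and coupling_term()
and the single neighbor term are mine, not rgb's code:

#include <stddef.h>

/* flat 3d index: element (i,j,k) of an imax x jmax x kmax lattice stored
   in one linear vector, with k varying fastest */
static inline size_t idx(size_t i, size_t j, size_t k,
                         size_t jmax, size_t kmax)
{
    return (i*jmax + j)*kmax + k;
}

/* one nearest-neighbor coupling term: the same arithmetic as the quoted
   expression, but readable enough to put in front of a class */
static double coupling_term(const double *vec, double coup,
                            size_t i, size_t j, size_t k,
                            size_t jmax, size_t kmax)
{
    return -coup * vec[idx(i, j, k, jmax, kmax)]
                 * vec[idx(i + 1, j + 1, k + 1, jmax, kmax)];
}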

> 
> which makes me hurt just to look at it but lets you evaluate the
> derivatives and integrate with a straight vector solver.
> 
> The second way is to pack the offsets of the LINEAR vector vec[] into
> a 3d pointer ***site -- one time, with reusable code -- and then
> the code becomes
> 
>  U = - coup*site[i][j][k]*site[i+1][j+1][k+1] + ... ;

[ ... thinking about all of the cache misses ... ouch ...]

Well, you made a part of one of my points.  With better abstraction
comes a cost.  In this case every access is a triple indirection: you
follow a pointer to a pointer to a pointer before you reach the data.
That makes it *hard* to generate good memory access patterns, which,
although the other code is painful to look at, you *can* do with the
other code.  That is, assume that you have the (i,j,k) triple.  For each
memory access you need to

	1) resolve (i,j,k) into an address -- either by computing the flat
	   offset (roughly 3 multiplies/2 adds per memory operation) or by
	   chasing the two intermediate pointers stored in ***site, and then
	2) use that address to dereference the actual element site[i][j][k].
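
For reference, the "pack the offsets of the LINEAR vector into a 3d
pointer ***site -- one time, with reusable code" step rgb describes is
roughly the following (a sketch of the idea, not his actual code):

#include <stdlib.h>

/* Build a double*** view onto an existing flat vector vec of length
   imax*jmax*kmax, so that site[i][j][k] and vec[(i*jmax + j)*kmax + k]
   name the same storage and *vec can still be handed to the ODE solver. */
double ***make_site_view(double *vec, size_t imax, size_t jmax, size_t kmax)
{
    double ***site = malloc(imax * sizeof *site);
    if (!site) return NULL;

    for (size_t i = 0; i < imax; i++) {
        site[i] = malloc(jmax * sizeof *site[i]);
        if (!site[i]) return NULL;          /* sketch only: leaks on failure */
        for (size_t j = 0; j < jmax; j++)
            site[i][j] = vec + (i*jmax + j)*kmax;
    }
    return site;
}

Every site[i][j][k] access then costs two extra dependent pointer loads
before you touch any data, which is exactly where the cache misses above
come from.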

There are techniques you can use to speed this up (precaching things,
ordering the loops and the indices you run over, blocking, various kinds
of explicit prefetching, reordering the data in RAM, ...).
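
For example, the loop ordering point is just "keep k innermost so the
flat vector is walked with unit stride" -- again a sketch, not tuned
code:

#include <stddef.h>

/* Scale every lattice site by -coup.  The k loop is innermost so vec and
   out are walked with unit stride; blocking and explicit prefetching
   would be layered on top of this. */
void scale_lattice(double *out, const double *vec, double coup,
                   size_t imax, size_t jmax, size_t kmax)
{
    for (size_t i = 0; i < imax; i++)
        for (size_t j = 0; j < jmax; j++) {
            const size_t row = (i*jmax + j)*kmax;   /* hoist the offset math */
            for (size_t k = 0; k < kmax; k++)
                out[row + k] = -coup * vec[row + k];
        }
}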

But this belabors the point.  The point being that you chose to use
something which represents an abstraction, to simplify the way you work
on the problem.  And yes, this abstraction, this working "at a higher
level", does in fact give you "more power" ... but it also costs you (as
I pointed out) some performance, as you are using the abstraction to
hide complexity while allowing the abstraction to take its kilogram of
flesh.

This may be the time to bring up TANSTAAFL: There Ain't No Such Thing As
A Free Lunch.

Short version: abstractions aren't free.  You lose something in using
them, and you gain something in using them.  Using one makes sense when
what you gain is more valuable than what you lose.

> 
> while you can STILL call the ODE solver with the *vec...
> 
> So, are pointers a force for good or evil?

Metaphysical programming ... :^)

> Neither.  They are a programming device, a tool, a way of directly

Yup.

[...]

>> You make your choices, and you pay the cost of the choices.
> 
> Now THIS I totally agree with.  The programmer is ultimately responsible
> for the program.  I leave you with a couple of koans from the Tao of

This is what I was saying before.  Must have been some coffee that
diffused faster than the rest.

-- 
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics LLC,
email: landman at scalableinformatics.com
web  : http://www.scalableinformatics.com
phone: +1 734 786 8423
fax  : +1 734 786 8452 or +1 866 888 3112
cell : +1 734 612 4615


