[Beowulf] Java vs C++ for interfacing to parallel library
Joe Landman
landman at scalableinformatics.com
Tue Aug 22 03:14:55 PDT 2006
Robert G. Brown wrote:
> On Mon, 21 Aug 2006, Joe Landman wrote:
>
>> I took a simple GSL program I used to introduce students to GSL, that
>> was a modified example from one of the GSL example files. Basically a
>> little Hooke's law bit to use as input to an LU solver. Really short
>> GSL program.
>
> Joe,
>
> Since you clearly have time on your hands and a mission (a GOOD thing,
Time, ... no. I thought it could be done quickly based on
discussions/past experience with other things. Mission? I dunno :) I
have other projects of fairly high priority. When you have a business,
priority always goes towards the things that put food on the table;
everything else is icing.
> mind you:-) you might try SWIG (link on the previous post). It looked
> to be relatively easy to use, and MIGHT manage heavy lifting. If you
> really want to try to hook the GSL into perl, that is (an interesting
> proposition, actually, but I think a pretty major project and an
> associated long-term maintenance problem unless you want to port it and
> forget it).
Someone did something like this with PDL, so it is possible that most of
the work has been done.
I am in the middle of a number of paying projects, so the non-paying
ones take a back seat.
>> What I discovered is that it doesn't take much to make this not work.
>> :( Specifically passing arrays and vectors back and forth between
>> C/Perl is hard (IMO).
>
> Well, remember that in the GSL arrays and vectors and so on are
> themselves often, nay usually, structs and not just flat memory blocks
> packed into standard **...pointer sets. I actually worked through them
> all once upon a time when trying to develop an extension of the GSL
> "matrix" type called "tensor" -- IIRC a matrix in the GSL is a block
> (itself a structured type) with additional metadata, and is something of
> an opaque data type where you have to (or rather, given that it is C,
> are "supposed to") access components via get/set functions from the
> OUTside. It wasn't terribly extensible to a generalized **..***tensor
All OO interfaces (and GSL is OO in its interface) really want you to go
through their accessors.  Basically, where I got stuck was that I had to
write the retrieval dereferencer for a Perl object in C, and then store
that in GSL.  I thought I could do that quickly; half an hour in, I
punted :)  (A rough sketch of that glue is below, after rgb's paragraph.)
> type of even moderate dimension, so I redid a lot of the basic pieces to
> flatten them out and make them a bit less objectoid and end up with a
> gsl-ish tensor type up to the 8th or 9th rank. Alas, the change would
> have broken too many functions (I guess) and was not warmly embraced, so
> the GSL still doesn't have a proper tensor beyond second rank.
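Roughly, the glue I got stuck on looks something like this.  A minimal
sketch, not my actual code: it assumes it is called from XS, so a Perl
interpreter context (pTHX) is in scope, the function name is made up, and
error handling is omitted.

    #include "EXTERN.h"
    #include "perl.h"
    #include <gsl/gsl_vector.h>

    /* copy the contents of a Perl array (AV) into a freshly allocated
       gsl_vector; the caller frees it with gsl_vector_free() */
    gsl_vector *av_to_gsl_vector(pTHX_ AV *av)
    {
        I32 n = av_len(av) + 1;              /* av_len returns the top index */
        gsl_vector *v = gsl_vector_alloc(n);
        I32 i;

        for (i = 0; i < n; i++) {
            SV **elem = av_fetch(av, i, 0);  /* borrow the i-th element */
            gsl_vector_set(v, i, elem ? SvNV(*elem) : 0.0);
        }
        return v;
    }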
The beauty of OO is that you could in theory change the underlying basic
types and not break the methods.  The danger is that it adds so much
abstraction that data manipulation gets really slow, like going through
an accessor function for every element.
Something I was thinking about yesterday was ways to mitigate this.
Basically, for arrays, you can (carefully) construct a good flat C array,
layer the metadata alongside it, and create a "high speed" accessor
interface to provide effectively the normal (*..*) array semantics.  I
think it is a solvable problem.
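A minimal sketch of what I have in mind.  The names (flat3d, F3D_AT) are
illustrative only, and error checking is omitted:

    #include <stdlib.h>

    typedef struct {
        size_t xdim, ydim, zdim;
        double *data;             /* xdim*ydim*zdim doubles, one flat block */
    } flat3d;

    /* "high speed" accessor: plain row-major index arithmetic, no
       function call per element */
    #define F3D_AT(t, i, j, k) \
        ((t)->data[((i) * (t)->ydim + (j)) * (t)->zdim + (k)])

    static flat3d *flat3d_new(size_t x, size_t y, size_t z)
    {
        flat3d *t = malloc(sizeof *t);
        t->xdim = x;  t->ydim = y;  t->zdim = z;
        t->data  = calloc(x * y * z, sizeof *t->data);
        return t;
    }

Then F3D_AT(t, i, j, k) reads and writes like an ordinary array element.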
> The real question about SWIG and friends is where one can draw the line.
> Any decent library has an API and its own data structs and macros and
> prototypes and so on. Most of these types are available to the end user
> only via #include files -- the library itself is simply relocatable code
> designed to pull those objects out of the right places when routines
> within it are called. An encapsulation program has to be ALMOST as
> smart as a compiler/linker, then -- it has to be able to parse out
> arbitrary data types from C source and map them not-too-badly into perl
> compatible memory constructs. This is NOT necessarily easy, since I
> might well have something like:
>
> typedef struct {
>     int xdim;
>     int ydim;
>     int zdim;
>     double ***tensor;
> } MyTensor;
>
> and might want to encapsulate:
>
> MyTensor *newtensor(int xdim, int ydim, int zdim)
> {
>     int i,j;
>     MyTensor *tensor;
>
>     tensor = (MyTensor *)malloc(sizeof(MyTensor));
>
>     tensor->xdim = xdim;
>     tensor->ydim = ydim;
>     tensor->zdim = zdim;
>     tensor->tensor = (double ***)malloc(xdim*sizeof(double **));
>     for(i=0;i<xdim;i++){
>         tensor->tensor[i] = (double **)malloc(ydim*sizeof(double *));
>         for(j=0;j<ydim;j++){
>             tensor->tensor[i][j] = (double *)malloc(zdim*sizeof(double));
>         }
>     }
>     return(tensor);
> }
>
> (with no error checking or initialization yet, of course). This is an
> obviously useful construct, although I would generally allocate all the
> actual memory for the tensor in a single block and do displacement
> arithmetic to pack the pointers into tensor->tensor[i][j] instead of the
> last malloc to ensure a contiguous block and retain the ability to pass
> the entire array as a (void *) pointer to a block of data.
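For concreteness, that single-block variant might look roughly like this,
using the MyTensor typedef above.  A sketch only: no error checking, and
it assumes the pointer tables leave the trailing doubles properly aligned
(true where pointers and doubles are both 8 bytes):

    #include <stdlib.h>

    MyTensor *newtensor_contig(int xdim, int ydim, int zdim)
    {
        size_t ptrbytes  = (size_t)xdim * sizeof(double **)
                         + (size_t)xdim * ydim * sizeof(double *);
        size_t databytes = (size_t)xdim * ydim * zdim * sizeof(double);
        char *block  = (char *)malloc(ptrbytes + databytes);
        MyTensor *t  = (MyTensor *)malloc(sizeof(MyTensor));
        double ***xp = (double ***)block;
        double **yp  = (double **)(block + (size_t)xdim * sizeof(double **));
        /* dp is the contiguous data; it can be passed around as one (void *) */
        double *dp   = (double *)(block + ptrbytes);
        int i, j;

        t->xdim = xdim;  t->ydim = ydim;  t->zdim = zdim;
        t->tensor = xp;
        for (i = 0; i < xdim; i++) {
            xp[i] = yp + (size_t)i * ydim;
            for (j = 0; j < ydim; j++)
                xp[i][j] = dp + ((size_t)i * ydim + j) * zdim;
        }
        return t;   /* free(t->tensor); free(t); releases everything */
    }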
>
> What is SWIG/perl supposed to do with this?
Well, you can create typemaps which tell SWIG what to do with a type like
this.  This is where you would need some metadata magic on the SV/AV types
to make sense of it in Perl.  And you are right, this might be where you
could exploit SWIG to do the heavy lifting.
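For the simplest case, say a function taking (double *vec, int len), the
typemap might look roughly like this.  A sketch, untested; $input, $1 and
$2 are SWIG's placeholders for the incoming Perl scalar and the two C
arguments:

    %typemap(in) (double *vec, int len) {
        AV *av;
        I32 i;
        if (!SvROK($input) || SvTYPE(SvRV($input)) != SVt_PVAV)
            croak("expected an array reference");
        av = (AV *)SvRV($input);
        $2 = av_len(av) + 1;
        $1 = (double *)malloc($2 * sizeof(double));
        for (i = 0; i < $2; i++) {
            SV **sv = av_fetch(av, i, 0);
            $1[i] = sv ? SvNV(*sv) : 0.0;
        }
    }
    %typemap(freearg) (double *vec, int len) {
        free($1);
    }

Something structured like your MyTensor would need considerably more of
this sort of marshalling, which is exactly where the pain starts.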
> What would perl do if I
> allocated a VECTOR of MyTensor **tensors so that dereferencing a
> particular element would look like tensor[i]->tensor[j][k][l] with loops
> that ran from 0 to tensor[i]->xdim etc.? If this isn't bad enough, what
> would perl do with a linked list (another common enough construct),
> especially a SPECIALIZED linked list whose members contained entire data
> structs?
Hmmm.  Perl (like the other dynamic languages) is smart enough to deal
with arrays/hashes of anything, including other arrays and hashes.  So
if you wanted to create what you are talking about there in pure Perl,
it might look something like this:
package tensor;
use base qw(Class::Accessor);
tensor->mk_accessors(qw(xdim ydim zdim tensor));
1;

# ... and then, elsewhere:
use tensor;
my $tensor = tensor->new({
    xdim   => 10,
    ydim   => 10,
    zdim   => 10,
    tensor => [],
});

for my $i (0 .. $tensor->xdim - 1) {
    for my $j (0 .. $tensor->ydim - 1) {
        for my $k (0 .. $tensor->zdim - 1) {
            # the nested arrayrefs under $tensor->{tensor} autovivify as
            # needed; element [i][j][k] is set to the result of calling
            # epsilon on the arguments $i, $j, $k
            $tensor->{tensor}[$i][$j][$k] = epsilon($i, $j, $k);
        }
    }
}
or something like that. Coding before coffee, always dangerous ...
Probably some bugs in my tensor.
> Could be it would do exactly the right thing, but even guessing what
> that thing might be requires a bit of knowledge about perl's internals.
> A really good encapsulation has to be as smart as most compilers, OR the
> encapsulating programmer has to be even smarter. This doesn't mean that
> it cannot be done and that some people aren't that smart. It's just
> that one has to REALLY want to do it to make it worthwhile.
Yeah, wanting to is part of it. The other part is that one must have
the time.
>
>>
>> Since I don't do this on a regular basis, this isn't so bad. Also there
>> is this PDL thing
>> (http://search.cpan.org/~csoe/PDL-2.4.3/Basic/Pod/Impatient.pod) which
>> doesn't look so bad, but it still doesn't solve the issues I want solved.
>>
>>>> Only when you have some ... odd ... structures or objects passing back
>>>> and forth which require a bit more work.
>>>
>>> What's an odd structure?
>>
>> As I have discovered ... odd structures are arrays ... and anything more
>> complex :(
>>
>> [...]
>>
>>>> Python has similar facilities. Generally speaking the dynamic
>>>> languages (Perl, Python, Ruby) are pretty easy to wrap around things
>>>> and link with other stuff, as long as the API/data structures are
>>>> pretty
>>>> clean.
>>>
>>> Ay, that's the rub...;-) That and what you consider "pretty easy"...:-)
>>
>> Less time than I spent on this so far !
>
> <A_C_love_story>
>
> O-ohh say, can you C?
>
> One of the (many) things that I truly love about C is that one retains
> precise control over how memory is used in a well-written C program.
> Nothing opaque about it, really. I can close my eyes and visualize just
> how the block of memory representing MyTensor above looks, for example.
> I know just what the offsets are (in absolute void * terms) from the
> starting address of a MyTensor object to any of its contents, and CAN
> arrange for a MyTensor object and all its contents to definitely be
> allocated as a single contiguous block of memory to avoid
> performance-sapping fragmentation without "hoping" that the compiler and
> kernel will provide it for me.
>
> With that degree of control I can then do very specific things with loop
> blocking relative to cache size or I can create data objects that are
> like nothing any fortran programmer ever dreamed of but that perfectly
> describe the actual optimal data objects of the task at hand. Using
> void types I can move anonymous blocks of memory around as I need to and
> defeat the well-meant but sometimes stultifying "rules" associated with
> manipulating ordinary typed variables.
>
> Finally, I can (given all of this information and control) CHOOSE to
> treat the resulting object as an opaque/protected data type and only
> create, destroy, or access contents of the object via provided
> functions, I can CHOOSE to create structs with members that are structs,
> I can CHOOSE to closely emulate C++ programming style or even Fortran
> programming style (if I want to treat the C compiler as if it had
> recently had a lobotomy:-). This isn't intended to disrespect those
> other compilers -- they both have virtues that make them desirable
> to at least certain kinds of programmers for certain kinds of programs.
> They just don't generally give you the same degree of control, at least
> if you use them the way God intended and don't defeat their strong
> typing and error checking.
>
> With assembler I would have -- slightly -- more precise control. But
> not much. Not much.
Actually, with assembler you have far more control over what gets emitted
in the instruction sequence.  Sometimes this is a very good thing.  There
are some loops in C that are really better off in SSE, though nothing you
can do to the compilers will convince them of this.  So you rewrite those
loops in assembler.
This is ... fun (i.e. best left to grad students who like the challenge
and don't dwell on the drudgery of writing hand-coded assembly).
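To give the flavor, here is the shape of such a loop written with SSE2
intrinsics rather than raw assembler (close enough for illustration; it
assumes n is even and the arrays are 16-byte aligned):

    #include <emmintrin.h>

    /* y[i] += a * x[i], two doubles per instruction, regardless of what
       the compiler might otherwise have decided to emit */
    void axpy_sse2(double *y, const double *x, double a, int n)
    {
        __m128d va = _mm_set1_pd(a);
        int i;
        for (i = 0; i < n; i += 2) {
            __m128d vx = _mm_load_pd(x + i);
            __m128d vy = _mm_load_pd(y + i);
            vy = _mm_add_pd(vy, _mm_mul_pd(va, vx));
            _mm_store_pd(y + i, vy);
        }
    }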
Unfortunately, my time to do R&D and pursue fun projects is bounded, as
it needs to be spent on those things that generate revenue.  This
is either a blessing or a curse, I prefer to think about it as the
former. If we get enough revenue then we have time to "fund" research
(e.g. spend time on things other than revenue projects and product R&D).
--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics LLC,
email: landman at scalableinformatics.com
web : http://www.scalableinformatics.com
phone: +1 734 786 8423
fax : +1 734 786 8452 or +1 866 888 3112
cell : +1 734 612 4615