[Beowulf] FYI ;) GROMACS on the Raspberry Pi

Bill Broadley bill at cse.ucdavis.edu
Wed Sep 19 15:15:30 PDT 2012


I taught an MPI class a few times and wanted something simple and fun
that could be improved upon several times as the students learned MPI.
It's obviously embarrassingly parallel, but non-trivial to do well:
there's often not enough work per pixel or per image to keep the
communication overhead low, and the work per pixel varies widely, which
rules out a trivial static division of the work.  The output is easy to
test for correctness, and you get a pretty picture as a result.

I came up with:

Project 1 - send a pixel coordinate to each CPU, learn basic
            send/receive.  How does it scale at high iteration counts?
            Low?

Project 2 - Send rows, how does it scale?

Project 3 - Use MPI_Irecv/MPI_Isend for non-blocking send/receive

Project 4 - Client-side queue to minimize ever running dry.
            MPI_ANY_SOURCE for the server.

I was pleased that about 25% of the class actually parallelized a
research code of their own by the end of the course.  From my completely
biased point of view, the students found the class fun and useful.



