[Beowulf] FY;) GROMACS on the Raspberry Pi
prentice.bisbal at rutgers.edu
Thu Sep 20 11:25:37 PDT 2012
I have a good one: generate a Mandelbrot fractal. It's interesting
because you can watch it iterate visibly faster as you add more
processors. Of course, this means you need to ssh into the head
node from a system with X Windows, and be able to run parallel jobs
interactively. The first Linux cluster I ever saw in person was
demoed this way, and it was a homework assignment in my parallel
programming class years ago.
Google "parallel fractal generator", and you should find a bunch of hits.
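The reason the Mandelbrot set makes a good cluster demo is that each pixel's escape-time count is independent, so a head node can hand each worker a band of rows and stitch the results back together. Here is a minimal sketch of that idea in plain Python (the function names and the 80x40 view are my own illustration, not any particular generator from the list discussion; a real demo would scatter the row ranges over MPI instead of a local loop):

```python
def mandel_iters(c, max_iter=100):
    """Iterations before z = z*z + c escapes |z| > 2 (escape-time kernel)."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return max_iter

def render_rows(row_start, row_end, width=80, height=40, max_iter=100):
    """Render rows [row_start, row_end) of the view; one band per worker."""
    band = []
    for y in range(row_start, row_end):
        im = -1.2 + 2.4 * y / (height - 1)
        row = []
        for x in range(width):
            re = -2.0 + 3.0 * x / (width - 1)
            row.append(mandel_iters(complex(re, im), max_iter))
        band.append(row)
    return band

# Stand-in for scatter/gather: a head node would send each of four
# workers one 10-row band, then concatenate the bands into the image.
bands = [render_rows(r, r + 10) for r in range(0, 40, 10)]
image = [row for band in bands for row in band]
```

Because the bands are disjoint, doubling the worker count roughly halves the wall-clock time, which is exactly the visible speedup that makes this demo work in front of an audience.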
On 09/19/2012 04:57 PM, Lux, Jim (337C) wrote:
> This brings up an excellent question for "learning to cluster" activities:
> what would be a good sample program to try? There was (is?) an MPI version of POV-Ray, as I recall. It was nice because it's showy and you can easily see whether you're getting a speedup.
> Computing pi isn't very dramatic, especially since most people don't have a feel for how fast it should run.
> Some sort of n-body code, perhaps?
> Something that does pattern matching?
> There are a lot of MPI-enabled finite element codes, but many don't have flashy output.
> And you'd like something that actually makes use of internode communication in a meaningful way (because you could play with reconfiguring it by plugging and unplugging cables), so embarrassingly parallel isn't as impressive. (E.g. rendering frames of an animation: so what if you do it 10 times faster with 10 computers?)
> Jim Lux
> -----Original Message-----
> From: beowulf-bounces at beowulf.org [mailto:beowulf-bounces at beowulf.org] On Behalf Of Bogdan Costescu
> Sent: Wednesday, September 19, 2012 3:33 AM
> To: Daniel Kidger
> Cc: Beowulf at beowulf.org
> Subject: Re: [Beowulf] FY;) GROMACS on the Raspberry Pi
> On Tue, Sep 18, 2012 at 10:10 AM, Daniel Kidger <daniel.kidger at gmail.com> wrote:
>> I touched on the GROMACS port to ClearSpeed when I worked there. I
>> then went on to write the port of AMBER to CS, plus I have a pair of
>> RPis that I tinker with.
> I'm not quite sure what the interest is... GROMACS is quite famous for having non-bonded kernels written in assembler and using features of the modern CPUs, but this is limited to some<snip>
> and will have a larger power consumption; plus, with so many components, the risk of one or more breaking and reducing the overall compute power is quite high. So is it worth it?
> (As a scientist I look at it from the perspective of getting useful results from calculations; as a learning experience it's surely useful, but then so would running any software using MPI.)
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf