[Beowulf] FY;) GROMACS on the Raspberry Pi

Bogdan Costescu bcostescu at gmail.com
Wed Sep 19 03:33:21 PDT 2012


On Tue, Sep 18, 2012 at 10:10 AM, Daniel Kidger <daniel.kidger at gmail.com> wrote:
> I touched on the Gromacs port to ClearSpeed when I worked there - I then
> went on to write the port of AMBER to CS
> plus I have a pair of RPis that I tinker with.

I'm not quite sure what the interest is... GROMACS is quite famous for
having non-bonded kernels written in assembler that exploit features of
modern CPUs, but this is limited to certain architectures: IA32
SSE/SSE2, x86_64 SSE/SSE2, IA64 and some versions of the Power
processors. There is work in progress to support AVX and FMA4. The
RPi, however, uses an ARM CPU. GROMACS should compile and run on the
RPi, but it will fall back to its generic kernels written in C, which
are around 2x slower than the optimized ones on the same CPU. So you
combine the slow C kernels with a slow CPU... A porting effort would
make sense if there were ARM-specific instructions which are not used
by default by the compiler and which are geared towards streaming
memory access, simultaneous floating point operations, etc., like SSE
and its successors. Are there any?
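
To give a feel for where that ~2x comes from, here is a minimal sketch
(my own illustration, not GROMACS source) of a plain-C Lennard-Jones
force loop next to an SSE-intrinsics version that handles four pairs
per iteration; the hand-tuned GROMACS kernels do essentially this and
more (approximate rsqrt with refinement, table lookups, careful
instruction scheduling):

#include <xmmintrin.h>  /* SSE intrinsics, x86 only; shown for contrast */

/* Generic-C style kernel: one i-j pair per iteration (schematic).
 * fscal[i] is the force scale factor for the 12-6 LJ potential. */
static void lj_force_scalar(int n, const float *rsq,
                            float c6, float c12, float *fscal)
{
    for (int i = 0; i < n; i++) {
        float rinv2 = 1.0f / rsq[i];
        float rinv6 = rinv2 * rinv2 * rinv2;
        fscal[i] = (12.0f * c12 * rinv6 * rinv6
                    - 6.0f * c6 * rinv6) * rinv2;
    }
}

/* Same arithmetic with SSE: four pairs per iteration. */
static void lj_force_sse(int n, const float *rsq,
                         float c6, float c12, float *fscal)
{
    __m128 vc6  = _mm_set1_ps(6.0f  * c6);
    __m128 vc12 = _mm_set1_ps(12.0f * c12);
    for (int i = 0; i < n; i += 4) {            /* assumes n % 4 == 0 */
        __m128 r2    = _mm_loadu_ps(rsq + i);
        __m128 rinv2 = _mm_rcp_ps(r2);          /* approximate 1/r^2 */
        __m128 rinv6 = _mm_mul_ps(_mm_mul_ps(rinv2, rinv2), rinv2);
        __m128 rep   = _mm_mul_ps(vc12, _mm_mul_ps(rinv6, rinv6));
        __m128 disp  = _mm_mul_ps(vc6, rinv6);
        _mm_storeu_ps(fscal + i, _mm_mul_ps(_mm_sub_ps(rep, disp), rinv2));
    }
}

Without an equivalent of the second loop for the target CPU, the
compiler's output for the first one is all you get.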

Furthermore, the new(-ish) versions of GROMACS (4.x) use domain
decomposition, which requires a low-latency interconnect to achieve
good scalability. From what I know, the earlier particle decomposition
(still available in the new versions, but almost never used) was less
demanding on the network. But maybe the computation is so slow that
the on-board Ethernet is fast enough to keep up... Anyway, the whole
cluster is probably going to run the simulation slower than a 4-6 core
Ivy Bridge CPU and will consume more power; plus, with so many
components, the risk of one or more breaking and reducing the overall
compute power is quite high. So is it worth it? (As a scientist I look
at it from the perspective of getting useful results from
calculations; as a learning experience it's surely useful, but then so
would running any software that uses MPI.)
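
To illustrate why domain decomposition leans so hard on interconnect
latency, here is a minimal MPI sketch (an assumed 1-D periodic
decomposition and halo size, not GROMACS code): each domain sends a
small boundary ("halo") of coordinates to its neighbours on every
single MD step, so the per-message latency of the network is paid on
every one of the millions of steps in a typical run.

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int left  = (rank - 1 + size) % size;   /* periodic 1-D neighbours */
    int right = (rank + 1) % size;

    enum { NHALO = 256 };                   /* assumed halo: a few kB */
    float send[3 * NHALO], recv[3 * NHALO];
    for (int i = 0; i < 3 * NHALO; i++) send[i] = 0.0f;

    for (int step = 0; step < 1000; step++) {
        /* ... integrate local particles, pack the halo to send ... */
        MPI_Sendrecv(send, 3 * NHALO, MPI_FLOAT, right, 0,
                     recv, 3 * NHALO, MPI_FLOAT, left,  0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        /* ... compute non-bonded forces including received halo atoms ... */
    }

    MPI_Finalize();
    return 0;
}

On a microsecond-latency fabric this exchange is nearly free; over the
RPi's Ethernet it is a fixed tax on every step, and whether it can keep
up depends on how long the (slow) local computation takes in between.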

Cheers,
Bogdan


