[Beowulf] [EXTERNAL] HPC demo

Lux, Jim (US 337K) james.p.lux at jpl.nasa.gov
Mon Jan 13 17:12:21 PST 2020


Back when (we are talking turn of the century), a standard MPI demo was a Mandelbrot demo with graphics output.

Do you need a tightly coupled demo, or is something with a bunch of embarrassingly parallel jobs that can run either serially or in parallel good enough?  There’s a whole raft of video/graphics rendering that fits in the latter category – you can render all the frames sequentially on one processor, or in parallel on many (Blender/POV-Ray).  I think POV-Ray has a mode where it farms out pieces of the image to each node.
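A minimal sketch of the idea (plain Python with the standard multiprocessing module standing in for the cluster, and a toy escape-time Mandelbrot standing in for Blender/POV-Ray — all names here are illustrative, not from any real render farm):

```python
# Toy "render farm": each frame is an independent Mandelbrot zoom, so the
# frames can be computed one after another or farmed out to workers --
# the embarrassingly parallel case described above.
from multiprocessing import Pool

def render_frame(args):
    frame, width, height = args
    zoom = 1.5 / (1 + 0.1 * frame)              # zoom in a little each frame
    pixels = []
    for y in range(height):
        for x in range(width):
            c = complex(-0.5 + zoom * (x / width - 0.5) * 3,
                        zoom * (y / height - 0.5) * 2)
            z, n = 0j, 0
            while abs(z) <= 2 and n < 50:       # escape-time iteration count
                z = z * z + c
                n += 1
            pixels.append(n)
    return frame, pixels

if __name__ == "__main__":
    jobs = [(f, 64, 48) for f in range(8)]
    serial = [render_frame(j) for j in jobs]    # one processor, frame by frame
    with Pool(4) as pool:                       # stand-in for "many nodes"
        parallel = pool.map(render_frame, jobs) # same frames, farmed out
    assert serial == parallel                   # identical output either way
    print("rendered", len(parallel), "frames")
```

The point the demo makes is that no frame depends on any other, so the speedup is limited only by the number of workers (and the job-distribution overhead).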

If you want something that demonstrates tighter coupling – there are probably benchmarks that do large matrix operations and run faster with multiple nodes.

The classic example is an N-body gravity simulation – at each time step you need to compute the gravitational attraction between all N(N-1)/2 pairs, and if you farm it across multiple nodes, there’s lots of internode communication.
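As a rough sketch of that inner loop (plain Python; the positions and masses are made up for illustration), the pairwise sum at the heart of each time step looks like:

```python
# Direct-sum N-body step: every pair interacts exactly once, so each time
# step costs N*(N-1)/2 force evaluations. This double loop is what gets
# split across nodes -- and since every body pulls on every other, each
# node needs every body's position, hence the internode traffic.
import math

G = 6.674e-11  # gravitational constant, SI units

def accelerations(pos, mass):
    """Per-body acceleration from all N(N-1)/2 pair interactions."""
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    pairs = 0
    for i in range(n):
        for j in range(i + 1, n):               # each pair counted once
            dx = [pos[j][k] - pos[i][k] for k in range(3)]
            r = math.sqrt(sum(d * d for d in dx)) + 1e-9  # tiny softening
            f = G / (r * r * r)
            for k in range(3):
                acc[i][k] += f * mass[j] * dx[k]   # equal and opposite,
                acc[j][k] -= f * mass[i] * dx[k]   # so compute the pair once
            pairs += 1
    return acc, pairs

pos = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 0.0]]
mass = [1e10] * 4
acc, pairs = accelerations(pos, mass)
print(pairs)  # 4*3/2 = 6 pair interactions
```

For N = 4 that is 6 pairs; for N = 100,000 it is about 5 billion per step, which is why it makes a convincing multi-node scaling demo.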

Googling for “Limulus supercomputing” might find some good examples.

An old Beowulf list post:
Skylar Thompson skylar.thompson at gmail.com
Mon Feb 1 17:36:00 PST 2016

Hi Olli-Pekka,



When we have LittleFe (http://littlefe.net) out in the wild (which
sounds a lot like what you're trying to do!), GalaxSee and Game of Life
are two favorites:

http://shodor.org/petascale/materials/UPModules/NBody/
http://shodor.org/petascale/materials/UPModules/GameOfLife/

They're simple enough to be understandable, are visual so even if you
don't grok the algorithm right away you can still get something out of
it, but still complex enough to be a good HPC show-case.



Matlab will make use of multiple nodes if you have the Parallel Computing Toolbox, and they’ve got some demos.  OTOH, Matlab is kind of pricey if you don’t already have it.

Jim Lux
Jet Propulsion Lab


From: Beowulf <beowulf-bounces at beowulf.org> on behalf of John McCulloch <johnm at pcpcdirect.com>
Date: Monday, January 13, 2020 at 4:35 PM
To: "beowulf at beowulf.org" <beowulf at beowulf.org>
Subject: [EXTERNAL] [Beowulf] HPC demo

I recently inherited management of a cluster and my knowledge is limited to a bit of Red Hat. I need to figure out a demo for upper management graphically demonstrating the speed up of running a parallel app on one x86 node versus multiple nodes up to 36. They have dual Gold 6132 procs and Mellanox EDR interconnect. Any suggestions would be appreciated.

Respectfully,
John McCulloch | PCPC Direct, Ltd.


