[Beowulf] thermal/power limits
Lux, Jim (337C)
james.p.lux at jpl.nasa.gov
Mon Aug 12 10:09:01 PDT 2013
And this is precisely why I like MPI-based solutions (or message passing in general). It forces the software architecture to explicitly decouple the threads in a timing sense (none of that "we'll use a shared-memory semaphore" stuff), so it tends to be more easily scaled or ported to other architectures.
However, it is a BIG conceptual jump in architecture for most applications that aren't in the embarrassingly parallel, "just fire off a parallel thread" bucket.
It's comparable to the fear and trepidation inspired by asynchronous logic designs. Harder to prove correct, etc.
From: Douglas Eadline [mailto:deadline at eadline.org]
Sent: Monday, August 12, 2013 9:35 AM
To: Lux, Jim (337C)
Cc: beowulf at beowulf.org
Subject: Re: [Beowulf] thermal/power limits
> Potentially, of course, once you bite the bullet to parallelize, and
> you do it in a scalable manner, then, you can presumably scale to
> architectures where you have N cores running at full speed (e.g. A
> classic cluster). I wonder, though, whether the end-user applications
> codes actually do that, or whether they design for the "single user on
> a single box" model. That is, they design to use multiple cores in
> the same box,but don't really design for multiple boxes, in terms of
> concurrency, latency between nodes, etc.
This, in my mind, is not an easy question to answer. Assuming an application can use more cores in a scalable fashion, the issue with SMP multi-core is how many effective cores you get vs. the actual count, due to memory contention.
In my tests, "it all depends on the application."
One of the nice things about MPI codes is the ability to run on 16 separate nodes, one 16-way node, or anything in between.
OpenMP has no way to get off the motherboard, but it will soon open the door to the on-board SIMD units. OpenMP does not guarantee an automatic win over MPI on multi-core, either.
> James Lux, P.E.
> Task Manager, FINDER - Finding Individuals for Disaster and Emergency
> Response Co-Principal Investigator, SCaN Testbed (née CoNNeCT) Project
> Jet Propulsion Laboratory
> 4800 Oak Grove Drive, MS 161-213
> Pasadena CA 91109
> Beowulf mailing list, Beowulf at beowulf.org, sponsored by Penguin
> Computing. To change your subscription (digest mode or unsubscribe)
> visit http://www.beowulf.org/mailman/listinfo/beowulf