[Beowulf] thermal/power limits

Lux, Jim (337C) james.p.lux at jpl.nasa.gov
Mon Aug 12 07:01:17 PDT 2013

Interesting how in the consumer PC world they're starting to realize the challenge of effectively parallelizing.  This article talks about the whole cores-vs-clock-speed tradeoff: they theorize that a power dissipation limit means clock speed * number of cores = constant.


"Many applications still only max out one or two cores effectively, so for most usage, a higher clock speed is better than more cores if you can’t have both. But for highly parallelizable tasks, such as video processing, 3D rendering, and scientific research,…"
"And for all of those applications that don’t parallelize well (hi, Adobe and LAME<http://lame.sourceforge.net/>!), the higher-core, lower-clocked, more-expensive CPUs will probably perform worse than the cheaper, fewer-core, higher-clocked ones."

Amdahl's law strikes again <grin>
And I wonder how many of those video processing, 3D rendering, and scientific research tasks actually have off-the-shelf user applications that can effectively use multiple cores?  Not everyone is coding up their own solutions, particularly for a MacPro (the subject of the article).  I'm also not sure that parallelizing into N cores running at X/N clock rate is faster than running 1 core at X clock rate.  If you are rendering animation frames (and for a lot of finite element codes), there's a certain number of arithmetic operations to be done to get the job done, and whether you do N parallel streams at 1/N rate or 1 stream at full rate doesn't matter.
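A minimal back-of-envelope sketch of that point, assuming the article's power-cap model (per-core clock = base clock / number of cores); the function, workload size, and serial fractions here are mine and purely illustrative, not from the article:

```python
def runtime(work_ops, serial_frac, cores, base_clock_hz):
    """Amdahl-style runtime under a power cap where clock * cores is constant.

    work_ops:    total arithmetic operations in the job
    serial_frac: fraction of the work that cannot be parallelized
    cores:       number of cores, each clocked at base_clock_hz / cores
    """
    per_core_clock = base_clock_hz / cores          # power cap: clock * cores = const
    serial_time = serial_frac * work_ops / per_core_clock
    parallel_time = (1 - serial_frac) * work_ops / (cores * per_core_clock)
    return serial_time + parallel_time

# Perfectly parallel job: N streams at 1/N rate take exactly as long as 1 at full rate.
print(runtime(1e9, 0.0, 1, 3e9), runtime(1e9, 0.0, 4, 3e9))

# Any serial fraction makes the many-core, lower-clock config strictly slower,
# because the serial part now runs on a slower core.
print(runtime(1e9, 0.1, 1, 3e9), runtime(1e9, 0.1, 4, 3e9))
```

Under this constraint the best case is a wash, and any serial fraction at all tips the balance toward fewer, faster cores.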

Potentially, of course, once you bite the bullet to parallelize, and you do it in a scalable manner, you can presumably scale to architectures where you have N cores running at full speed (e.g., a classic cluster).  I wonder, though, whether the end-user application codes actually do that, or whether they design for the "single user on a single box" model.  That is, they design to use multiple cores in the same box, but don't really design for multiple boxes, in terms of concurrency, latency between nodes, etc.

James Lux, P.E.
Task Manager, FINDER – Finding Individuals for Disaster and Emergency Response
Co-Principal Investigator, SCaN Testbed (née CoNNeCT) Project
Jet Propulsion Laboratory
4800 Oak Grove Drive, MS 161-213
Pasadena CA 91109

