[Beowulf] IBM Sequoia
Robert G. Brown
rgb at phy.duke.edu
Wed Feb 4 12:04:09 PST 2009
On Wed, 4 Feb 2009, Kilian CAVALOTTI wrote:
> On Wednesday 04 February 2009 15:08:12 Robert G. Brown wrote:
>> Or use slow, slow, slow processors (but a lot of them).
>> The latter isn't a crazy idea, depending on the kind of task this
>> faster-that-fastest system is supposed to be faster on. Some sort of
>> massively SIMD decomposable problem with minimal nonlocal IPCs where the
>> per-processor tasks are modest and nearly independent would get the
>> near-linear scaling required to use up 1.6 million cores, and it would
>> explain the 1 MB of memory per core. Consider each node as representing
>> (say) 10,000 neurons and you've got a 16 billion neuron neural net with
>> some sort of semilocal topology. Not bad, actually.
> Isn't that the idea behind SGI's Molecule concept?
which is actually kind of interesting (if slow).
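For concreteness, the per-core arithmetic in the quoted passage can be sanity-checked with a few lines of Python. The 4-byte weight size below is my own assumption, not a figure from the post; the point is just to show why the connection topology has to be sparse and semilocal:

```python
# Back-of-envelope check of the numbers in the quoted passage.
cores = 1_600_000                 # ~1.6 million cores
mem_per_core = 1 * 1024**2        # 1 MB per core, in bytes
neurons_per_core = 10_000         # the "10,000 neurons per node" guess

total_neurons = cores * neurons_per_core
bytes_per_neuron = mem_per_core / neurons_per_core

print(total_neurons)              # 16,000,000,000 -- the 16 billion neurons
print(bytes_per_neuron)           # ~105 bytes of state per neuron

# Assuming 4-byte weights (my assumption), each neuron can only afford
# a couple dozen locally stored synapses -- hence semilocal topology.
weight_bytes = 4
synapses_per_neuron = int(bytes_per_neuron // weight_bytes)
print(synapses_per_neuron)        # 26
```

So at 1 MB/core the per-neuron memory budget is tiny, which is consistent with the nearly-independent, minimal-nonlocal-IPC picture above.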
This is something that is not completely orthogonal to what might be my
next research project, so I'm actually finding the current discussion
quite interesting.
Robert G. Brown http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 1-919-660-2525  email: rgb at phy.duke.edu