[Beowulf] IBM's Watson on Jeopardy tonight
Lux, Jim (337C)
james.p.lux at jpl.nasa.gov
Wed Feb 16 07:20:42 PST 2011
Aside from how brains are "programmed" (a fascinating question)
One big difference between biological systems and non-bio is the handling of errors and faults. Biological systems tend to assume that (many) failures occur and use a strategy of massive redundancy with fuzzy collective action. To date, there's been very little work on large-scale computing systems that are more than just fault tolerant. Partly it's that the problem is difficult; partly it's a market-driven thing: the people who want to anticipate and deal with faults tend to want "provably failure tolerant," so that drives things like error-correcting codes, Triple Modular Redundancy (TMR), and a whole host of schemes for failover, hot standby, etc.
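To make the TMR idea concrete, here's a toy sketch (Python, illustrative only — real TMR is done in hardware voters, not application code):

```python
def tmr_vote(a, b, c):
    """Classic Triple Modular Redundancy: run the same computation on
    three redundant modules and take the majority of their outputs.
    Any single faulty module is outvoted by the other two."""
    if a == b or a == c:
        return a
    return b  # a disagrees with both; either b == c, or all three differ

# One module returns garbage; the majority masks the fault.
print(tmr_vote(42, 42, 7))  # -> 42
```

The point is that this is "provable": with the stated fault model (at most one module wrong), the output is guaranteed correct — very different from the statistical, good-enough style of biological redundancy.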
Our friends at google are probably one of the best examples of a large computing environment which is very aware of "good enough" and which has enough scale to know that at any given time, some fraction of their computers are dead/wrong/faulted.
But even there, the redundancy is at a pretty high level, compared to the very fine grained redundancy in any biological system.
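A crude sketch of what fine-grained "fuzzy collective" redundancy looks like, as opposed to discrete failover (Python, with made-up numbers purely for illustration):

```python
import random

def collective_estimate(n_units=10000, dead_fraction=0.05, true_value=1.0):
    """Thousands of cheap, noisy units, some silently dead, still
    produce a usable collective answer -- no voter, no failover logic,
    just averaging over the survivors."""
    readings = []
    for _ in range(n_units):
        if random.random() < dead_fraction:
            continue  # a dead unit simply contributes nothing
        readings.append(true_value + random.gauss(0, 0.5))  # noisy unit
    return sum(readings) / len(readings)

random.seed(1)
print(round(collective_estimate(), 3))  # close to 1.0 despite 5% dead units
```

No individual unit is trustworthy, but the ensemble is — which is roughly how neurons and Man o' War zooids get away with being individually unreliable.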
Even very simple biological systems with very simple individual cellular algorithms/control rules display emergent behavior which is quite complex. Consider the recent article about the annual plague of Portuguese Men o' War in Florida (a floating colony of different animals that resembles a jellyfish but is actually not one organism).
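Emergence from trivially simple local rules is easy to demonstrate in a few lines. A toy 1-D majority-rule automaton (my own illustration, nothing to do with actual Man o' War biology): each cell just copies the majority of its three-cell neighborhood, and random noise organizes itself into contiguous domains with no global coordinator.

```python
import random

def step(cells):
    """Each cell adopts the majority state of itself and its two
    neighbours (wrap-around) -- about as simple as a local rule gets."""
    n = len(cells)
    return [1 if cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n] >= 2 else 0
            for i in range(n)]

random.seed(0)
cells = [random.randint(0, 1) for _ in range(60)]
for _ in range(10):
    cells = step(cells)
# Initial noise tends to smooth into stable runs of 0s and 1s:
print(''.join(map(str, cells)))
```

Nobody "programmed" the domain structure; it falls out of the local rule — the same flavor of argument people make about colony organisms.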
From: beowulf-bounces at beowulf.org [beowulf-bounces at beowulf.org] On Behalf Of ariel sabiguero yawelak [asabigue at fing.edu.uy]
Sent: Wednesday, February 16, 2011 06:52
To: beowulf at beowulf.org
Subject: Re: [Beowulf] IBM's Watson on Jeopardy tonight
I believe that it is because, on the one hand, we don't accept fuzzy
results, and on the other, we don't know how to train the millions of
ANNs required to mimic a mammal's brain.
The way in which biology deals with failures, faults and defects is far
beyond our full comprehension. Questions like "how do you program a
brain?" are beyond our grasp too; yet we re-program ourselves
intuitively, day after day.
In some way, the speed at which multicores are evolving (more and
simpler cores instead of a single, yet powerful one) indicates that
parallel processing is the way to go -I think I read that somewhere on
this list-. Maybe the answer that evolution found for carbon-based
processing is different from the one for future silicon-based life forms
(or at least, intelligent processing). A dead company used to say "the
network is the computer", and for our brains it seems so. Will it be
true for really-massive processors? Will they shrink until they only sum
and bias inputs without any programming inside? Will we discuss
multi-million-core SMP? If so, I doubt that it will be single-bus-based.
I'm not sure I'll live until we find an answer, but it is a nice
long-term question.
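The "only sum and bias inputs" speculation is essentially the textbook artificial neuron: a weighted sum, a bias, and a squashing function, with all the "programming" living in the weights. A minimal sketch (Python; the specific numbers are arbitrary):

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs plus a bias,
    squashed through a sigmoid. There is no program inside the unit;
    behaviour is determined entirely by the weights and bias."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

out = neuron([0.5, -1.0, 2.0], [0.8, 0.2, 0.1], bias=-0.1)
print(out)  # ~0.574, always strictly between 0 and 1
```

A million-core machine of such units really would be "the network is the computer": all the interesting structure is in the connections, not in the nodes.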
On 16/02/11 12:29, "C. Bergström" wrote:
> Lux, Jim (337C) wrote:
>> I think it will be a while before a machine has the wide span of capabilities of a human (particularly in terms of the ability to manipulate the surroundings), and, as someone pointed out, the energy consumption is quite different (as is the underlying computational rate... lots of fairly slow neurons with lots of parallelism vs relatively few really fast transistors)
> Doesn't this then raise the question of why we aren't modeling computers
> and programming models after the brain? ;)
Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf