[Beowulf] IBM's Watson on Jeopardy tonight

Robert G. Brown rgb at phy.duke.edu
Wed Feb 16 10:06:35 PST 2011

On Wed, 16 Feb 2011, "C. Bergström" wrote:

>> We are, but that problem is, well, "hard".  As in grand challenge hard.
> I wonder how you'd really describe the human brain learning in terms of a 
> programming model....

Well, AI researchers tend to have two answers.  One is semantic and the
other is microscopic.  The semantic description is
functional/operational, broken down in terms of e.g. conditionals and
logical elements, and doesn't come close to explaining consciousness
(see Searle's "Chinese Room" objection).

The microscopic model is basically neural networks, but NN-based AI
hasn't made it much past the level of a flatworm or an ant.  NNs are
marvelously useful nonlinear function approximators and hence are quite
capable of building Bayesian reasoning processes through inference (if
appropriately architected), but nowhere are the results semantically or
functionally much like the human brain.  It's more like we think that
SOME parts of what the human brain does are SOMEWHAT mediated by
networks that have enormous structure (most of it occult) that does all
sorts of things (most of them unknown) to produce results (that nobody
can observe from the OUTSIDE of the brain save highly indirectly, with a
nod to fMRI and certain wetware neural implants, which recently have had
some success in studying dynamic brain function in situ, as
near-exceptions).

Like I said, a hard problem: philosophically, mathematically,
statistically, computationally, biologically, psychologically, and
semantically/information-theoretically.  Partly because the solution
involves at least all of these fields in synthesis (and, if you are a
religious sort, you can throw in idle speculation about higher
dimensional Universes where some fraction of "intelligence" resides in
and/or utilizes physics in the additional dimensions).

>> There are other problems -- brains are highly non-deterministic (in that
>> they often selectively amplify tiny nearly random signals, as in "having
>> an idea" or "reacting completely differently depending on our hormonal
>> state, our immediate past history, and the phase of the moon").  Brains
>> are extremely non-Markovian with all sorts of multiple-time-scale memory
>> and with plenty of functional structures we honestly don't understand
>> much at all.  We don't even know how brains ENCODE memory -- when I've
>> got Sheryl Crow running through my head in what SEEM to be remarkably
>> good fidelity,
> I wonder just how good it is?  If I had to guess I'd say pretty bad.  (no 
> offense)  I wonder how much actual "space" it takes up.  I'd bet from a 
> physical size perspective we're possibly ahead of nature in terms of data 
> storage density.  (Without the packaging/cases.. etc)

It's actually damn good.  I can't remember the exact numbers and have to
teach and can't look them up, but my recollection is that the total
sensory bandwidth and the bandwidth of the memory channels in the human brain
are estimated at terabits -- just think of your visual field.  One thing
that is clear is that the brain is extremely efficient at information
storage and representation, using all sorts of tricks to compress
information.  However, it is highly error prone.
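
For a feel for the numbers, here's the kind of back-of-the-envelope
arithmetic involved.  Every figure below is a rough, commonly quoted
ballpark (roughly a million optic-nerve fibers per eye, on the order of
ten bits per second per fiber), not a measurement, and the "raw video"
comparison is deliberately naive:

```python
# Rough ballpark figures only -- order-of-magnitude estimates, not data.
fibers_per_eye = 1_000_000   # optic-nerve axons per eye (order of magnitude)
bits_per_fiber = 10          # very rough per-axon information rate, bits/s
optic_bandwidth = 2 * fibers_per_eye * bits_per_fiber   # both eyes, bits/s

# Deliberately naive comparison: the visual field as uncompressed video,
# pretending foveal acuity held everywhere (it doesn't -- that's the trick).
pixels = 100_000_000         # hypothetical pixel count for the whole field
fps = 30
bits_per_pixel = 24
raw = pixels * fps * bits_per_pixel                     # bits/s

print(f"optic-nerve estimate: ~{optic_bandwidth / 1e6:.0f} Mbit/s")
print(f"naive raw estimate:   ~{raw / 1e9:.0f} Gbit/s")
print(f"implied compression:  ~{raw / optic_bandwidth:.0f}x")
```

However you juggle the input numbers, a compression factor of thousands
falls out, which is the point about efficient storage and representation.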

> I think motor functions are better understood than pure thought.

Sure, and they are rather boring.  Circuitry to activate a mechanical
process is straightforward (although sometimes nontrivial).  But where
the impulses come from to intentionally activate motor function, ah,
that's one of the many rubs.

>> Humans can "almost" remember somebody's name one day, for example, and
>> then another day know it immediately,
> cache hit
>> and another day still not even
>> recognize that they once knew it.
> cache miss

But it isn't that simple.  Not at all.  First of all, there is no cache:
this is all long-term storage retrieval, as the brain's immediate,
short-, and intermediate-term memory (where immediate is the only one
that is arguably "cache") is all differentiated on much shorter
timescales than weeks or months.  Second of all, human retrieval is in some sense
associative and spontaneous, rather than a lookup process.  We KNOW how
computers look things up, and it is nothing at all like the way people
remember things.  Human memory is perhaps very slightly like using
hashes to store and retrieve information in computing, but in another
sense it is nothing at all like it.  The computer, one way or another,
stores a datum >>at a location<< and retrieves it by establishing a map
to the location.  The brain does not.  You do not have a location in
your brain that corresponds to your memory of e.g. what you ate for
dinner last night, or the value of pi.  That information is stored all
over the place, and the neurons that help store it may well be
simultaneously helping to store other information at the same time.
There also isn't a single pathway to that information (enter the correct
hash key, decode it into an address) -- there are multiple pathways and
some of them can be triggered by irrelevant stimuli.
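
The contrast can be illustrated with a classic toy model.  A Hopfield
network (offered purely as an illustration of content-addressed,
distributed storage, NOT as a claim about how the brain actually works)
smears every stored pattern across a single weight matrix and retrieves
by settling from a corrupted cue rather than by decoding an address:

```python
import numpy as np

# Toy Hopfield associative memory: every weight participates in storing
# every pattern, and recall is content-addressed -- present a corrupted
# version of a memory and the dynamics settle back onto the stored one.
rng = np.random.default_rng(1)
N = 64
patterns = rng.choice([-1, 1], size=(3, N))   # three +/-1 "memories"

# Hebbian storage: one matrix holds all three patterns at once.
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)   # no self-coupling

def recall(cue, steps=10):
    s = cue.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s.astype(int)

# Corrupt 10 of the 64 bits of pattern 0, then recover it from the cue.
cue = patterns[0].copy()
flipped = rng.choice(N, size=10, replace=False)
cue[flipped] *= -1
out = recall(cue)
print("recovered intact:", np.array_equal(out, patterns[0]))
```

Contrast that with a hash table, where the key decodes to exactly one
location and a one-bit error in the key retrieves nothing at all.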

There is also enormous variability in ability and function.  Some humans
remember names like magic, and can remember thousands of people by name.
Others (like me) have to think hard to remember the names of their own
kids, nephews, and nieces, and are hopeless with student names.  Which
isn't a function of intelligence -- politicians are often great at names
and dumb as oxen, and I'm, well, a physicist/polymath sort of guy who
STILL can't remember the value of e and who has to derive everything he
teaches because it is EASIER than trying to remember it all.

Don't get me wrong -- I think true AI is ultimately achievable, but I
think semantic AI is a false pathway to a Chinese Room.


Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb at phy.duke.edu
