[Beowulf] Wired article about Go machine

Ellis Wilson xclski at yahoo.com
Wed Mar 18 17:47:35 PDT 2009


Peter St. John wrote:
> This article at Wired is about Go playing computers:
> http://blog.wired.com/wiredscience/2009/03/gobrain.html
> Includes a pic of a 24 node cluster at Santa Cruz, and a YouTube video of a
> famous game set to music :-)
> 
> My beef, which started with Ken Thompson saying he was disappointed by how
> little we learned about human cognition from chess computers, is about
> statements like this:
> 
> "People hoped that if we had a strong Go program, it would teach us how our
> minds work. But that's not the case," said Bob
> Hearn<http://www.dartmouth.edu/%7Erah/>,
> a Dartmouth College artificial intelligence programmer. "We just threw brute
> force at a program we thought required intellect."
> 
> And yet the article points out:
> 
> [our brain is an]...efficiently configured biological processor — sporting
> 10^15 neural connections, capable of 10^16 calculations per second
> 
> Our brains do brute-force massively distributed computing. We just aren't
> conscious of most of it.
> 
> Peter

Peter,

I would agree with Ken in that it is a disappointing and ultimately 
fruitless process to attempt to learn about human cognition by building 
a program to emulate some very specific activity of human beings.  This 
line of thought, in its purest sense, is reductionism.  While I do find 
artificial intelligence to be very interesting, I believe at some point 
or another we will have to recognize that the brain (and our subsequent 
existence) is something more than the result of the perceivable atoms 
therein.  No viewpoint is completely objective as long as we are finite 
human beings and occupy a place in the world we perceive.

To say that all simulation of some portion of our thoughts is fruitless 
would be incorrect, as I think some insight into the mind is possible 
through codifying thought.  However, there are far too many catch-22s 
and logical fallacies in using the mind to understand the mind to ever 
fully understand how it works from a scientific point of view.  Philosophy 
will at some point have to step in to explain the (possibly huge) gaps 
between even the future's fastest simulated brains and our own.

In his book "The View from Nowhere," I believe Thomas Nagel puts it 
most poignantly by stating, "Eventually, I believe, current attempts to 
understand the mind by analogy with man-made computers that can perform 
superbly some of the same external tasks as conscious beings will be 
recognized as a gigantic waste of time".  This was written over twenty 
years ago.  Science has given us tools to make our lives wonderfully 
easier and thereby has proven to be useful, but it answers none of the 
multitude of mind-body dilemmas, validates the reality of our 
perception, nor will it or any other reductionist theory provide insight 
into the much more complex areas of cognition.  This is especially true 
with the discovery of quantum mechanics, which makes the observer's 
subjective perception absolutely necessary.  Full objectivity (or in 
this application full codification of human thought) just isn't possible.

I wish it weren't so, for by study I am a computer scientist and by 
hobby a philosopher; however, at present I remain skeptical.

Ellis Wilson