Neural Network applications using Beowulf
Thomas Zheng
tzheng at qualcomm.com
Fri Nov 22 10:32:19 PST 2002
Hi Eray,
Very interesting perspective from a computer scientist. I was just reading
a book called Talking Nets: An Oral History of Neural Networks. I highly
recommend it to anyone interested in the origins and history of neural
networks over the past several decades. Let me tell ya, it was a rough
ride for the pioneers!
From what I read, the disagreement (or grudges) between the CS and EE
camps on the subject of neural networks goes way back! But I like
constructive disagreements.
Regards
At 11:00 AM 11/20/2002 +0200, you wrote:
>Hi Thomas,
>
>It is very hard to pin down what will pave the way for next-generation AI
>research. In machine learning it is well known that no single method fits
>all domains. Sometimes an ANN will do the job, and most of the time it
>won't. You have to be very pragmatic. Of course, I'm very pleased with the
>information you've given. I'd be delighted to read the paper and evaluate
>its theoretical content.
>
>As we all know, even a small MLFF ANN with back-propagation learning takes
>a very long time to converge (with a scary time complexity); it is usually
>infeasible for larger networks and really hard problems. However, if
>Hecht-Nielsen's networks are composed of thousands of artificial neurons,
>then we could in principle parallelize the learning algorithm in a smart
>way. (Since we have n >> p, there is a practical way to partition.) It
>then becomes a supercomputing problem. My concern is whether those
>networks are doing anything useful, because in ANN research it is often
>the case that you are achieving a result that a simple, ordinary algorithm
>could achieve in a much more efficient manner. In supercomputing,
>efficiency becomes an even more important aspect, because people don't
>build supercomputers for fun; they build them to solve what was not
>solvable before. (As we all appreciate.)
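>
>(To make the partitioning idea concrete, here is a minimal sketch, not a
>definitive implementation: it assumes a data-parallel split of the
>training examples across cluster nodes, with per-step gradients summed by
>an MPI all-reduce. The network, data, and learning rate are all
>hypothetical toys; Python with mpi4py/numpy is used purely for brevity.)
>
>    from mpi4py import MPI   # assumes an MPI-capable cluster, e.g. a Beowulf
>    import numpy as np
>
>    comm = MPI.COMM_WORLD
>    rank, size = comm.Get_rank(), comm.Get_size()
>
>    # n training examples >> p processors, so we partition the example
>    # set across ranks rather than partitioning the network itself.
>    n_in, n_out, lr = 4, 1, 0.5
>    rng = np.random.default_rng(rank)            # per-rank data shard
>    X = rng.standard_normal((1000, n_in))        # hypothetical inputs
>    y = (X.sum(axis=1, keepdims=True) > 0).astype(float)  # toy target
>
>    W = np.zeros((n_in, n_out))  # every rank starts from the same weights
>
>    for step in range(100):
>        out = 1.0 / (1.0 + np.exp(-(X @ W)))     # sigmoid forward pass
>        delta = (out - y) * out * (1.0 - out)    # squared-error gradient
>        g_local = X.T @ delta / len(X)
>        g = np.empty_like(g_local)
>        comm.Allreduce(g_local, g, op=MPI.SUM)   # sum gradients over ranks
>        W -= lr * g / size      # identical update keeps ranks in sync
>
>(Run with, say, mpirun -np 8 on a cluster; the point is just that with
>n >> p the communication per step is a single small all-reduce.)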
>
>That is, one has to make sure that one is not doing ANNs for the sake of
>ANNs, especially on a supercomputer. If you're an AI researcher you will
>also have noticed that most ANN or GA/hybrid-systems research focuses on
>getting results for a problem that could be solved better with existing
>algorithms. That is why I'm a little skeptical of the quality of work that
>bears the "ANN" label. I think this puts me on the harder edge of the
>algorithms world, but that's what I think :>
>
>Nevertheless, in some cases ANNs are better; for instance, when you are
>predicting a stock index from market time series. So if we have a problem
>that is better solved by Hecht-Nielsen's networks, and if it requires a
>supercomputer, it would prove to be a new challenge for supercomputing
>specialists.
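>
>(Again, just a hedged sketch of what I mean by time-series prediction,
>run on a synthetic random-walk "index" rather than real market data; the
>window size, hidden width, and learning rate are arbitrary assumptions:)
>
>    import numpy as np
>
>    rng = np.random.default_rng(0)
>    series = np.cumsum(rng.standard_normal(500))   # synthetic index
>    series = (series - series.mean()) / series.std()
>
>    # Sliding window: predict series[t] from the previous `window` values.
>    window = 10
>    X = np.array([series[i:i + window]
>                  for i in range(len(series) - window)])
>    y = series[window:]
>
>    # One hidden layer, trained with plain batch gradient descent.
>    W1 = rng.standard_normal((window, 16)) * 0.1
>    W2 = rng.standard_normal((16, 1)) * 0.1
>    for _ in range(2000):
>        a = np.tanh(X @ W1)                        # hidden activations
>        err = (a @ W2).ravel() - y                 # prediction error
>        gW2 = a.T @ err[:, None] / len(y)
>        gW1 = X.T @ ((err[:, None] * W2.T) * (1.0 - a**2)) / len(y)
>        W2 -= 0.1 * gW2
>        W1 -= 0.1 * gW1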
>
>Regards,
>
>
>On Wednesday 20 November 2002 01:05 am, Thomas Zheng wrote:
> > Hi Eray,
> >
> > What you said was pretty much true until recently. At WCCI 2002
> > (http://www.wcci2002.org/) this May, Dr. Hecht-Nielsen from the
> > University of California, San Diego, announced his new thalamocortical
> > information processing theory, which, I think, is paving the way for
> > next-generation AI research. In his special lecture, he showed a couple
> > of slides of the parallel-computing machines he used in his lab. Even
> > though he never got into the details of these machines, from what I
> > know about associative memory networks, which are the building blocks
> > of his new theory, they do exhibit highly parallelizable structure. And
> > we are talking about thousands of nodes as a minimum requirement for
> > these networks to be functional.
> >
> > In my opinion, there is tremendous potential for parallel computing in
> > the neural-network arena. The question is what kind of practical/useful
> > applications would come out of it.
> >
> > Regards
> >
> >
> > Thomas Zheng
> >
> > At 09:40 AM 11/14/2002 +0200, you wrote:
> > >On Tuesday 12 November 2002 06:24, Robert G. Brown wrote:
> > > > I actually think that there is room to do a whole lot of interesting
> > > > research on this in the realm of Real Computer Science.
> > > >
> > > > Too bad I'm a physicist...;-)
> > >
> > >Note that most artificial neural network applications don't fall within
> > >the realm of supercomputing, since they would be best suited to hardware
> > >implementations or, more commonly, serial software.
> > >
> > >We discussed this with colleagues back at the Bilkent CS department, and
> > >we could not find great research opportunities in this area. It is a
> > >little similar to stuff like parallel DFA/NFA systems. You first need an
> > >application to prove that there is a need for problems of that magnitude
> > >(more than what a serial computer could solve!). What good is a
> > >supercomputer for an artificial neural network composed of just
> > >20 nodes?
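> > >
> > >(A back-of-envelope illustration of that point, assuming a fully
> > >connected net where the work per step grows as n^2 multiply-adds;
> > >the node counts are arbitrary:)
> > >
> > >    for n in (20, 20000):
> > >        print(n, "nodes:", 2 * n * n, "flops/step")
> > >    # 20 nodes    ->       800 flops/step: trivial on any serial machine
> > >    # 20000 nodes -> 800000000 flops/step: where parallelism starts to pay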
> > >
> > >If, of course, somebody showed an application that did demand the power
> > >of a supercomputer, it would be very different: then we would get out
> > >all of our combinatorial tools to partition the computational space and
> > >parallelize whatever algorithm there is :)
> > >
> > >(Recurrent) neural networks being Turing-complete, I assume such a
> > >network would bear an arrangement radically different from the
> > >"multi-layer feed-forward" networks that EE people seem to be obsessed
> > >with. I have lost interest in that area, since such networks don't seem
> > >to demand parallel systems and they are not biologically plausible.
> > >
> > >Regards,
> > >
> > >--
> > >Eray Ozkural (exa) <erayo at cs.bilkent.edu.tr>
> > >Comp. Sci. Dept., Bilkent University, Ankara
> > >www: http://www.cs.bilkent.edu.tr/~erayo
> > >Malfunction: http://mp3.com/ariza
> > >GPG public key fingerprint: 360C 852F 88B0 A745 F31B EA0F 7C07 AE16 874D 539C
> >
> > Regards,
> > Thomas Zheng
> >
> > _______________________________________________
> > Beowulf mailing list, Beowulf at beowulf.org
> > To change your subscription (digest mode or unsubscribe) visit
> > http://www.beowulf.org/mailman/listinfo/beowulf
>
>--
>Eray Ozkural
>GPG public key fingerprint: 360C 852F 88B0 A745 F31B EA0F 7C07 AE16 874D 539C
Regards,
Thomas Zheng