
One of Many Benefits of Influence Learning - A New Learning Algorithm for Neural Networks

A New Learning Algorithm Will Make Using Neural Networks In Autonomous Systems More Practical and Scalable.



    RIVERSIDE, NJ, September 26, 2010 /24-7PressRelease/ -- A newly developed neural-network learning algorithm called Influence Learning overcomes many of the shortcomings and limitations that plagued previous algorithms. A short but informative entry explaining Influence Learning and how it works can be found at the book site for Netlab Loligo.

There has been very little overlap between what has proven practical and implementable in the field of robotics and what has been offered in the area of neural-network simulation. Sadly, the most proven robots, even those that need to be autonomous, have been the ones that avoided neural-network models altogether.

When you look at DARPA's autonomous vehicle contests, for example, there were always one or two entries that attempted to use neural-network models. But the successful vehicles in those races were always the ones that used traditional computing algorithms.

"The cars in these races that were based purely on neural-networks would be a certain, and usually early fail" says John Repici, creator of the influence learning algorithm.

"It could just be personal bias" explains Mr. Repici, "but I think it would be preferable to have neural networks drive autonomous robots. Neural network tools, however, have been limited in ways that make them nearly useless when they meet real-world applications."

So What's The Problem?

In a word, something Temple Grandin refers to as "abstractification". For the past decade or two, neural network models have been exercises in nearly pure abstraction, often attempting to compensate for the failings of an existing mathematical model simply by adding "yet another layer" of computational complexity.

What we are left with, even after almost 20 years of research, is a crop of algorithms that:

- don't allow feedback, or severely limit feedback,

- don't allow more than a few layers of neurons, and

- limit the number of processors that can be used in parallel.

These are clearly the most limiting problems with the current crop of neural network tools, but you will be hard-pressed to see them discussed in many papers touting innovations in neural networks. These basic limitations have become the proverbial elephant in the room that nobody wants to talk about.
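As a rough illustration only (this sketch is not from the press release and is not the Influence Learning algorithm), the Python snippet below shows the kind of strictly feedforward pass that conventional layer-by-layer training assumes; the layer names and sizes are hypothetical. A feedback connection would create a cycle in this graph, which is exactly the sort of thing the limitations above describe.

# Illustrative sketch only: a strictly feedforward pass of the kind
# conventional backpropagation assumes. Names and sizes are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Two hypothetical weight matrices: input -> hidden -> output.
W_hidden = rng.normal(size=(4, 3))   # 3 inputs feeding 4 hidden neurons
W_output = rng.normal(size=(2, 4))   # 4 hidden neurons feeding 2 outputs

def forward(x):
    """One strictly feedforward pass: activations flow in one direction only."""
    h = sigmoid(W_hidden @ x)
    y = sigmoid(W_output @ h)
    return y

# A feedback (recurrent) connection -- for example, routing y back into the
# hidden layer -- would create a cycle. Plain backpropagation assumes an
# acyclic, layer-by-layer graph, so such a cycle must be removed or unrolled
# in time before training can proceed: the kind of restriction listed above.
print(forward(np.array([0.5, -1.0, 2.0])))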

Influence Learning takes on the above list of shortcomings directly and resolves them. It is just one of two new (patent-pending) learning algorithms described, along with other innovations, in the book Netlab Loligo.

Title: Netlab Loligo: New Approaches to Neural Network Simulation
ISBN: 978-0984425600
Paperback: 332 pages

Visit: StandOutPublishing.com

Website: http://standoutpublishing.com

# # #

Contact Information

John Repici
Stand Out Publishing
Riverside, NJ
U.S.
Voice: 609 519-4200