3D Robotics

The problem with neural networks

This, from the always-interesting Jack Crossfire blog, is the smartest observation on neural networks and genetic algorithms I've seen in ages. Excerpt: "In our last experience with neural network feedback, the feedback always headed towards 0. That one used straight back propagation to predict cyclic from attitude change. That fiasco has cursed all future investments in the radically different genetic approach. The genetic approach would constantly evaluate the attitude tracking & standardize networks which track better. How do U keep it from forgetting high wind feedback when evolving in low wind?" To repeat: how do you keep it from forgetting high-wind feedback when evolving in low wind? As humans, we solve this with a mix of short-term and long-term memory. Do we need to create an analog in neural computing to avoid the problem Jack rightly spots?

Comments

  • I work with fairly sparse training sets, so I think it would not be too difficult to balance old and new. I would have to experiment with this idea.
  • 3D Robotics
    That suggests that the idea of a purely real-time learning neural network isn't practical. If you have to combine real-time learning with replaying old data sets (hmmm: real life vs dreams? Sorry to anthropomorphize!) it sounds like more trouble than it's worth.
  • You have to bear in mind that a neural network is just a statistical method that attempts to create a useful mapping between sensor inputs and control outputs. Its intelligence derives from the manner in which it is trained, and training data sets need to include old experiences as well as new. Otherwise, the network will tend to unlearn old lessons, which is an issue with any network that adapts in real time (online learning).
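The replay idea raised in these comments can be sketched with a toy supervised example. Everything below is illustrative, not from the original post: a single linear "controller" with two weights, where one feature only matters in high wind. Training first on a high-wind regime and then adapting only on low-wind data overwrites what was learned; mixing stored high-wind samples back into the stream (experience replay) preserves it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the problem above: features are [state, state*wind].
# The high-wind regime (wind=1) wants gain 2.0; low wind (wind=0) wants -1.0.
def make_batch(wind, gain, n=500):
    x = rng.uniform(-1.0, 1.0, n)
    feats = np.stack([x, x * wind], axis=1)   # shape (n, 2)
    return feats, gain * x

def sgd(w, feats, targets, lr=0.1, epochs=20):
    # Plain stochastic gradient descent on squared error.
    w = w.copy()
    for _ in range(epochs):
        for f, t in zip(feats, targets):
            w -= lr * (w @ f - t) * f
    return w

def mse(w, feats, targets):
    return float(np.mean((feats @ w - targets) ** 2))

high_x, high_y = make_batch(wind=1.0, gain=2.0)
low_x, low_y = make_batch(wind=0.0, gain=-1.0)

w0 = sgd(np.zeros(2), high_x, high_y)     # learn high-wind behaviour first

# Naive online learning: keep adapting on low-wind data only.
w_naive = sgd(w0, low_x, low_y)

# Replay: mix stored high-wind samples into the low-wind training stream.
mix_x = np.concatenate([low_x, high_x])
mix_y = np.concatenate([low_y, high_y])
order = rng.permutation(len(mix_x))
w_replay = sgd(w0, mix_x[order], mix_y[order])

print("high-wind error, naive :", mse(w_naive, high_x, high_y))
print("high-wind error, replay:", mse(w_replay, high_x, high_y))
```

The naive run ends with a large error on the high-wind data it once handled well, while the replay run keeps both regimes. This is the "old experiences as well as new" point made in the comment above, in its simplest possible form; a real autopilot would need a bounded replay buffer rather than the full history kept here.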