This post, from the always interesting Jack Crossfire blog, is the smartest observation on neural networks and genetic algorithms I've seen in ages. Excerpt:
"In our last experience with neural network feedback, the feedback always headed towards 0. That one used straight back propagation to predict cyclic from attitude change. That fiasco has cursed all future investments in the radically different genetic approach. The genetic approach would constantly evaluate the attitude tracking & standardize networks which track better. How do U keep it from forgetting high wind feedback when evolving in low wind?"
To repeat: How do you keep it from forgetting high-wind feedback when evolving in low wind?
As humans, we solve this with a mix of short-term and long-term memory. Do we need to create an analog in neural computing to avoid the problem Jack rightly spots?
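One way to sketch that analog in the genetic setting is to score every candidate against an archive of past conditions rather than only the current one: the archive plays the role of long-term memory, while the current condition is the short-term part. The toy fitness model and all names below are hypothetical (the excerpt doesn't show Jack's actual controller or fitness function); this is just a minimal illustration of the idea:

```python
import random

# Hypothetical stand-in for evaluating one controller genome in one wind
# condition. In a real system this would run the attitude-tracking loop
# and return something like negative tracking error. Here a genome is a
# single gain, and tracking degrades as the gain mismatches the wind.
def track_fitness(genome, wind_speed):
    return -abs(genome - wind_speed)

# Long-term memory: an archive of past conditions that every candidate is
# always scored against, even while evolution happens in low wind.
condition_archive = [0.5, 8.0]  # e.g. one low-wind and one high-wind episode

def archived_fitness(genome, current_wind):
    # Short-term memory: the condition we are evolving in right now.
    scores = [track_fitness(genome, current_wind)]
    # Long-term memory: old regimes never drop out of the evaluation.
    scores += [track_fitness(genome, w) for w in condition_archive]
    # Worst-case aggregation keeps the GA from trading away high-wind
    # performance for a marginal low-wind gain.
    return min(scores)

def evolve(current_wind, pop_size=30, generations=50):
    population = [random.uniform(0.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda g: archived_fitness(g, current_wind),
                        reverse=True)
        survivors = population[: pop_size // 2]
        # Refill the population with mutated copies of the survivors.
        population = survivors + [g + random.gauss(0.0, 0.3) for g in survivors]
    return population[0]

best = evolve(current_wind=0.5)  # evolving in low wind
print(best, archived_fitness(best, 0.5))
```

With worst-case aggregation the winning genome is forced to compromise between the low-wind and high-wind regimes instead of drifting toward whichever one it is currently evolving in; averaging the archived scores would be a softer variant of the same memory trick.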