Jack Crossfire's daily updates on his autonomous helicopter project are always fascinating, but today's is even more impressive than most. It describes how he uses a neural network to take the inputs from his accelerometers, gyros, magnetometers and GPS and turn them all into outputs to the heli that keep it stable in any direction.
Here are some more diagrams, but check out the full post to also see his analysis of how cheap real-time GPS is getting and the possibility of using cheap optical mouse sensors for position hold.
I took an Artificial Intelligence class back in undergrad (I got a lousy grade, so take my opinions with a grain of salt). The routines themselves are usually dead simple to implement; it's a question of getting a really good training set to reproduce the model. A PID with an NN is definitely something to hack up in a weekend, but getting it really dialed in is the hard part.
I definitely wouldn't trust a 100% ANN autopilot to fly my UAV (yet). The uncertainty of what kind of maneuver the ANN might execute is a bit scary to me; crashing the UAV is pretty much the last thing I want to do. However, ANNs are an incredibly useful tool for filtering real-world data and making certain decisions from that data.
Lately I've been experimenting with using a combination of expert systems and ANNs. I use expert systems to define certain behaviors that are safe and predictable. The ANN simply filters sensor data and decides which behavior should be executed.
For example, I have an expert system on a UAV that defines 2 concrete behaviors: turn left and turn right. The noisy sensor data is fed directly into an ANN, which filters the data and then decides whether it needs to turn left or right. The expert system then decides whether or not the maneuver is safe, and acts accordingly. This doesn't utilize the full power of NNs, but it does eliminate a lot of the uncertainty in using them.
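A minimal sketch of that arbitration pattern, with made-up weights and safety thresholds (the real trained weights and flight envelope limits would of course be project-specific):

```python
import math

def ann_decide(features, weights, bias):
    """Toy single-neuron 'ANN': weighted sum of filtered sensor
    features squashed through a sigmoid.  Output > 0.5 -> turn right,
    otherwise turn left.  Weights would come from offline training."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def safety_gate(proposed_turn, roll_deg, altitude_m):
    """Rule-based 'expert system' layer: only allow the maneuver when
    the aircraft is in a safe envelope (thresholds are illustrative)."""
    if abs(roll_deg) > 30 or altitude_m < 20:
        return "hold"          # refuse the maneuver, keep current heading
    return proposed_turn

# One decision cycle: the ANN proposes, the expert system disposes.
features = [0.8, -0.1, 0.3]          # filtered sensor inputs (illustrative)
p = ann_decide(features, [1.2, 0.4, -0.7], 0.1)
proposal = "turn_right" if p > 0.5 else "turn_left"
action = safety_gate(proposal, roll_deg=12.0, altitude_m=150.0)
print(action)                        # -> turn_right
```

The key property is that the ANN's output can never directly command the aircraft; it only selects among behaviors the deterministic layer has already vetted.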
If you're running a quad-core 3GHz processor, you have too many computing cycles for anything useful besides video games, so a scripting language does a good job of slowing things down.
Maybe not real-world in the software business, but neural nets are definitely mainstream in robotics research. As an exercise or tutorial on how to apply a neural network to a control problem, PID might be the right level of complexity to tackle.
On the other hand, I just spent the past 20 minutes searching Google for a simple example of modeling PID with a NN, and never got close to finding anything at an introductory level. The best libraries seem to be in Python and Matlab, but that's pretty far removed from getting something running on an embedded controller.
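For what it's worth, here's roughly the sort of introductory example I was looking for, as a sketch: a PID output is just a linear combination of the error, its integral, and its derivative, so even a single linear neuron trained by gradient descent can recover the gains. All numbers here (gains, learning rate, iteration count) are arbitrary choices:

```python
import random

# Target PID we want the network to imitate (gains are arbitrary).
KP, KI, KD = 2.0, 0.5, 0.1
def pid(e, ei, ed):
    return KP * e + KI * ei + KD * ed

# A single linear neuron: u = w . [e, ei, ed] + b.  Because a PID is
# linear in its inputs, this is enough capacity to reproduce it exactly.
w = [0.0, 0.0, 0.0]
b = 0.0
lr = 0.01
random.seed(0)
for _ in range(20000):
    x = [random.uniform(-1, 1) for _ in range(3)]  # (error, integral, derivative)
    target = pid(*x)
    y = sum(wi * xi for wi, xi in zip(w, x)) + b
    err = y - target
    # Stochastic gradient descent on the squared error.
    for i in range(3):
        w[i] -= lr * err * x[i]
    b -= lr * err

print([round(wi, 3) for wi in w])   # -> [2.0, 0.5, 0.1]
```

Of course the interesting case is the nonlinear one, where you add a hidden layer and train against logged flight data rather than a known controller, but the training loop looks the same.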
Pretty incredible that they're doing back propagation in interpreted languages these days. The performance of interpreted languages has become so standardized, 1 program in C is all it would take to put Silicon Valley out of business.
Neural networks are horribly inefficient & aren't the real world in the software business. Stick to tried & true deterministic methods unless you absolutely can't solve the problem after 6 months of banging on it.
PID loops work great for a small number of inputs, but aren't so helpful when you start to integrate a lot of inputs (e.g. 6-DOF IMU + compass + GPS + other flight data).
A good project would be to model a simple PID with a neural network. One issue is that most neural nets use floating point math, but integer libraries do exist and might run on something as small as an Arduino. I have an integer NN library written in C built into the firmware of my Blackfin robots, so this is a subject in which I have a keen interest.
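As a sketch of what the integer math looks like (written in Python for readability, though the shifts port directly to C on an AVR), here is a single fixed-point neuron; the Q8.8 format and the names are illustrative choices, not the library mentioned above:

```python
# Fixed-point (Q8.8) neuron: the kind of arithmetic an integer-only
# NN library on a small microcontroller would use.  Values are ints
# scaled by 256; multiplies go through a wider intermediate and are
# shifted back down.

FRAC = 8                       # fractional bits
ONE = 1 << FRAC                # 1.0 in Q8.8

def to_q(x):                   # float -> Q8.8
    return int(round(x * ONE))

def q_mul(a, b):               # Q8.8 * Q8.8 -> Q8.8
    return (a * b) >> FRAC

def neuron_q(inputs_q, weights_q, bias_q):
    acc = bias_q
    for x, w in zip(inputs_q, weights_q):
        acc += q_mul(x, w)
    return max(0, acc)         # ReLU keeps everything integer-friendly

x = [to_q(v) for v in (0.5, -0.25)]
w = [to_q(v) for v in (1.5, 2.0)]
out = neuron_q(x, w, to_q(0.125))
print(out / ONE)               # -> 0.375 (exact: 0.5*1.5 - 0.25*2.0 + 0.125)
```

In C you would use int16_t values with an int32_t accumulator and the same shift, which is cheap even on an 8-bit AVR.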
Jack's posts tend to be somewhat cryptic, but what he has illustrated is the use of an artificial neural network as an alternative to a Kalman filter in the control loop of his inertial navigation system.
Neural nets and Kalman filters are both statistical techniques that are intended to generate useful control signals from noisy input data. The advantage of a neural net is that it can work with an arbitrary set of inputs and outputs, whereas the Kalman filter works from a dynamic model that explicitly defines the relationship between inputs and outputs. Computationally, the neural network is actually quite simple - it's just a collection of numeric "weights" that determine how much of each signal from each layer is fed forward to the next layer. The trick is in establishing the weights through a "training" process, generally conducted before the aircraft ever leaves the ground.
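To make the "weights fed forward layer to layer" point concrete, here is a toy two-layer forward pass with placeholder weights (in practice these numbers are exactly what the training process would establish):

```python
import math

def layer(inputs, weights, biases):
    """One feedforward layer: each output is a weighted sum of the
    previous layer's outputs, pushed through a squashing function."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

sensors = [0.2, -0.5, 0.9]                 # e.g. filtered gyro/accel values
hidden = layer(sensors, [[0.5, -1.0, 0.3], [0.8, 0.2, -0.6]], [0.0, 0.1])
outputs = layer(hidden, [[1.0, -1.0]], [0.0])
print(outputs)                             # one control output in (-1, 1)
```

That really is the whole runtime cost: a handful of multiply-accumulates per layer, which is why the forward pass is viable even on modest hardware once training is done offline.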
I haven't found much literature on neural nets that's particularly helpful in providing a starting point in understanding - most discussions are conducted at the grad student / post-doc level. My approach was to start with a simple neural network software library and build some simple networks to see how things worked. There are some accessible libraries in Python, e.g. http://pyrorobotics.org/?page=PyroModuleNeuralNetworks. I can dig up some other links, or maybe Jack has some suggestions. In any case, it's an interesting technique that actually dates back to the 1950s, progressed significantly in the '80s, and recently has made a strong comeback as the core technology for most evolutionary robotics projects.
The 324-pixel mouse sensor to which he refers is an 18 x 18 pixel array that takes a picture at up to 1500 frames per second, and is the core of most optical mice. Some researchers have taken these close-range sensors and added a long-range lens. The mouse sensor is a $1 chip (e.g. Agilent ADNS-2610), and there's a nice DIY article on hacking an optical mouse to create one of these "optical flow" sensors for robots - http://home.roadrunner.com/~maccody/robotics/croms-1/croms-1.html
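The geometry for turning that flow into a velocity estimate is simple, assuming the sensor points straight down; here's a back-of-envelope sketch, with made-up numbers for the lens field of view and flight conditions:

```python
import math

def ground_speed(pixels_per_frame, fps, altitude_m, fov_deg, n_pixels):
    """Convert optical flow (pixels/frame) from a downward-looking
    sensor into ground speed, using the small-angle approximation for
    the ground distance each pixel subtends."""
    fov_rad = math.radians(fov_deg)
    metres_per_pixel = altitude_m * fov_rad / n_pixels
    return pixels_per_frame * metres_per_pixel * fps

# 0.1 px/frame of flow at 1500 fps, 10 m up, 18-pixel array, 20-degree lens:
v = ground_speed(0.1, 1500, 10.0, 20.0, 18)
print(round(v, 1), "m/s")                  # -> 29.1 m/s
```

Note the altitude dependence: without an independent height measurement (baro, sonar, GPS) the flow only gives you velocity up to a scale factor.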
Here's a nice technical paper about the application of these sensors to UAVs - www.ee.byu.edu/faculty/beard/papers/preprints/BarberGriffithsMcLain...
Many thanks for a really clear and helpful explanation!
What's your sense: are neural nets a good option for our level of IMU, or should we stick with the usual PI (and sometimes D) loops?
--chris