Hey, who out there is coding AI?

My basic assumption is that AI programming today is, at best, a chess program. What I mean is that everything is based on pre-existing data and a decision tree that handles each situation. In a chess program, that means there is most likely a database of some sort that chooses a move based upon its likelihood of leading to a checkmate. Maybe the chess program is capable of assessing which move it should make based upon the chess rating of the player it is up against, but that is still derived from previous experience (data logging) with that user or from preprogrammed difficulty levels, and the result is nothing more than a secondary decision tree that helps narrow down the decision the chess program makes.

The essence of AI is a decision based upon known data.

Pretend that we have a project with the mission of flying from Seattle to Mount Rushmore. Even if I take my basic assumption about AI and apply it to what is taking place inside a UAV, decisions are based on the constant flow of data from the sensors and GPS. If the UAV's AI is designed to find lift, then all the AI is doing is recognizing that condition, which is preprogrammed, and executing a subroutine that takes advantage of that situation, modifying its flight plan accordingly, as long as it ends up at Mount Rushmore. From the perspective of the computer, even the sensor input is pre-existing, a kind of analog database that it compares to conditions it can utilize. What the AI should do in a particular situation is predefined, based on recognizing the conditions in the environment. If the AI encounters an unknown, it will have to resort to a program designed to handle what it has not experienced before, which is going to require data logging and Boolean-logic "learning" to still achieve the goal of getting to Mount Rushmore.

Suppose the condition the AI has never encountered before is a hurricane. It may be that the AI collected more data, data that it had no idea existed before, but to do that, the AI had to be pointed in the direction of collecting that data in the first place. Whatever response the AI then makes with this new data is going to come from a program that tells it to collect a certain kind of data about itself as it strives to achieve the goal of getting to Mount Rushmore. But the essence of AI is still a decision tree; it is just that the decision tree is being filtered through the predefined expectation of what gets it to Mount Rushmore. Basically:

    If this is getting me to Mount Rushmore Then
        Keep doing it = true
    End If

We still have a decision tree that works on predefined data. The only difference is that we have a program that tells the AI how to build the decision tree based on the positive choices that achieve the goal. The decision tree being built would really be based on a template, and that is why AI is essentially a decision based upon known data, even data that it has not seen before, because even the unknown is going to be made to conform to something that is known.

Here is a philosophy of where I would like to take AI in the future.

I think the future of AI will be made using chaos theory and fractals. For instance, snowflakes are practically infinite in their appearance, but they are still snowflakes. AI based on a pure decision tree may be able to handle this fact, but the process of recognizing each individual snowflake will take more time than it's worth to figure out.
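To make that concrete, here is a minimal Python sketch of such a goal-filtered decision: every candidate move is tested against "is this getting me to Mount Rushmore?" and the best one wins. The coordinates, action set, and names are all invented for illustration, not any real autopilot API:

```python
import math

# A minimal, self-contained sketch of the "goal-filtered decision tree."
# States are (x, y) positions; actions are unit moves. All values invented.

GOAL = (450.0, 120.0)                      # stand-in for Mount Rushmore

ACTIONS = {                                # candidate heading changes
    "north": (0, 1), "south": (0, -1),
    "east":  (1, 0), "west":  (-1, 0),
}

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def choose_action(state):
    """Keep any action whose predicted outcome moves us toward the goal."""
    best, best_d = None, distance(state, GOAL)
    for name, (dx, dy) in ACTIONS.items():
        predicted = (state[0] + dx, state[1] + dy)
        if distance(predicted, GOAL) < best_d:   # "getting me to Mount Rushmore?"
            best, best_d = name, distance(predicted, GOAL)
    return best or "hold"                        # no improving move: fall back

print(choose_action((0.0, 0.0)))                 # -> "east"
```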
Wind moves in predictable patterns, and that predictability is not much different from the fact that a snowflake is still a snowflake even though each one, individually, is radically different.

Of course, this notion of using chaos theory is nothing new. In a world that is increasingly using biometrics, the idea of using the math that matches the fractal for something like facial recognition is becoming increasingly common. That recognition would then allow a robot, perhaps, to go into a subroutine that governs its protocol around a human being.

In the world of UAVs, and for that matter robotics in general, the ability to use fractals could create the ability to forecast conditions before they happen. For instance, instead of having a situation where the decision tree is being created from the linear input of the sensors, the AI would anticipate its own unknowns.

A good example of what I mean would be eddies in the wind. In order to survive a hurricane, the AI would anticipate where they are and plan a flight path that tries to avoid the most powerful swirls. In a one-knot breeze, the AI can choose to ignore them as unimportant and stop forecasting these unknowable events. Somewhere in between these two wind factors, the AI might choose to start forecasting new data for itself (a sketch of that thresholding idea follows below).

Another way this might be useful is recognizing a geographical condition, such as a mountain, and projecting how the wind may act in that situation. A factor it might consider, then, is abandoning a current lift scenario in favor of taking advantage of the lift being generated near that mountain (or hill, for that matter).

There is no escaping a decision tree. Decision trees are a necessary function, but they can become more of the conscious part of the AI, while the abstraction of fractals can become more of a subconscious that is able to bring the abstract into focus. In fact, in this subconscious, the AI can run simulations of what events might take place if it chooses a certain path and have a preplanned behavior for venturing off into the unknown.
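As a rough illustration of that thresholding idea, the forecaster below simply scales how far ahead it simulates eddies based on wind speed. The thresholds and horizon are invented numbers:

```python
# A sketch of "when to bother forecasting": below some wind speed eddies are
# noise, above another the forecaster always runs at full horizon, and in
# between the AI scales its effort. All thresholds here are made up.

CALM_KNOTS   = 1.0     # below this, eddies are noise: skip forecasting
SEVERE_KNOTS = 35.0    # above this, forecast at the maximum horizon

def forecast_horizon_s(wind_knots, max_horizon_s=120.0):
    """Seconds of eddy simulation to run ahead of the aircraft."""
    if wind_knots <= CALM_KNOTS:
        return 0.0                                   # not worth the cycles
    if wind_knots >= SEVERE_KNOTS:
        return max_horizon_s                         # hurricane: look far ahead
    frac = (wind_knots - CALM_KNOTS) / (SEVERE_KNOTS - CALM_KNOTS)
    return frac * max_horizon_s                      # scale effort in between

for w in (0.5, 10.0, 40.0):
    print(w, "kt ->", round(forecast_horizon_s(w), 1), "s of lookahead")
```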
A certain group of my friends are working on AI projects for various agencies. The Navy has a sick one that is pretty crazy. At best, with my cameras, I want mine to have limited intelligence: follow-this, obstacle avoidance, auto landing, and path finding.
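For the path-finding item on that list, something like a toy grid A* search is roughly the level of "limited intelligence" involved. The grid and obstacles below are made up for illustration:

```python
import heapq

# A toy grid path finder (A*). '#' cells are obstacles; moves cost 1 each.

def astar(grid, start, goal):
    """grid: list of strings. Returns a list of (row, col) cells, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#':
                heapq.heappush(frontier,
                               (cost + 1 + h((nr, nc)), cost + 1,
                                (nr, nc), path + [(nr, nc)]))
    return None                                  # no route around the obstacles

grid = ["....#...",
        ".##.#.#.",
        "....#...",
        ".#......"]
print(astar(grid, (0, 0), (3, 7)))               # route threading the gaps
```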
What you seem to want is learning, evolution-based programming, where the computer "learns" or adapts through experience. If you can give the computer enough input to make a great decision, such as weather information, then you almost don't need to teach it; you can simply program it to respond to different wind patterns and currents. Unless you're dogfighting, doing bombing runs, or something where variables go from merely erratic to needing spur-of-the-moment decision making, there's nothing much more challenging up in the sky below 400 ft AGL than rough weather. :)
I guess my question to you is "what are you trying to do?"
*and for the record, AI, as I've been hearing it, goes a good bit beyond a glorified chess program :)*
Research that I've found particularly interesting is called "embodied intelligence", described by researchers such as Josh Bongard, Hod Lipson, Rolf Pfeifer, and others, where the concept is to build resilient machines through continuous self-modeling. Their work is primarily with ground-based machines (e.g., multi-legged robots), as the dynamics, limited to the force of gravity, are considerably less complex to compute than aerodynamics. Essentially, the concept is that the robot (or whatever) carries an internal model of its own dynamics and then, through the use of genetic algorithms and simulation, evolves new dynamic models to deal with the loss of a limb or a change of operating conditions. These computations are generally performed offline, but a sufficiently powerful onboard computer could enable this internally. I believe Hod Lipson demonstrated some of this at a recent TED talk.
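As a toy version of that idea (not their actual method, which evolves full body models), here is a tiny genetic algorithm that re-fits a two-parameter internal model when the robot's observed behavior stops matching it. The "dynamics" here are just a line:

```python
import random

# Toy continuous self-modeling: the robot's internal model is two parameters
# of a linear response, and a small GA re-fits them to fresh observations.

random.seed(0)
TRUE_A, TRUE_B = 0.7, -2.0                       # the real (changed) dynamics
observations = [(x, TRUE_A * x + TRUE_B) for x in range(-5, 6)]

def fitness(model):
    a, b = model
    return -sum((a * x + b - y) ** 2 for x, y in observations)  # higher = better

pop = [(random.uniform(-2, 2), random.uniform(-5, 5)) for _ in range(40)]
for gen in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                           # keep the best self-models
    pop = parents + [
        (p[0] + random.gauss(0, 0.1), p[1] + random.gauss(0, 0.1))
        for p in random.choices(parents, k=30)   # mutate copies of survivors
    ]

best = max(pop, key=fitness)
print("recovered model a=%.2f b=%.2f" % best)    # should land near (0.70, -2.00)
```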
What you're describing is called 'partial plan' autonomy, and yes, outside the lab, where we're building systems to interact with the real world, this is pretty much the state of the art for systems that actually have to work autonomously, first time and every time, like a UAV. Personally, I think the most interesting research in autonomy right now is in emergent complexity, where you have a number of interoperating rule-based behaviors which, combined together, produce a much more complex behavior that moves the entire system toward the desired goal. Unfortunately, such complex behaviors aren't entirely predictable, and in real-world situations that is often a downside.
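A minimal sketch of that rule combination: two simple behaviors (seek the goal, avoid an obstacle) each contribute a steering vector, and the blend produces a curved avoidance path that neither rule describes on its own. The weights and geometry are invented:

```python
import math

# Two rule-based behaviors blended into one emergent trajectory.

GOAL = (10.0, 0.0)
OBSTACLE, AVOID_RADIUS = (5.0, 0.1), 3.0

def seek(pos):
    dx, dy = GOAL[0] - pos[0], GOAL[1] - pos[1]
    d = math.hypot(dx, dy) or 1.0
    return (dx / d, dy / d)                        # unit vector toward goal

def avoid(pos):
    dx, dy = pos[0] - OBSTACLE[0], pos[1] - OBSTACLE[1]
    d = math.hypot(dx, dy)
    if d > AVOID_RADIUS:
        return (0.0, 0.0)                          # rule stays silent
    push = (AVOID_RADIUS - d) / AVOID_RADIUS       # stronger when closer
    return (push * dx / (d or 1.0), push * dy / (d or 1.0))

pos = (0.0, 0.0)
for step in range(40):
    sx, sy = seek(pos)
    ax, ay = avoid(pos)
    vx, vy = sx + 2.5 * ax, sy + 2.5 * ay          # blended behavior
    n = math.hypot(vx, vy) or 1.0
    pos = (pos[0] + 0.4 * vx / n, pos[1] + 0.4 * vy / n)
print("final position (should be near the goal):", pos)
```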
Applying chaos theory would be useful because you could use the math behind the "butterfly effect" and escape the need for direct sensor input. Back to the wind eddies: the AI would use the math to anticipate the kinds of eddies that might be dangerous to its survival before encountering them. In this way the AI could already be "thinking" up strategies based upon a simulation of what might happen.
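The butterfly effect itself is easy to demonstrate. The sketch below integrates the Lorenz system from two initial conditions differing by one part in a million; the rapid divergence is exactly why any eddy forecast would have to be short-horizon:

```python
# Classic butterfly-effect demo: Euler-integrate the Lorenz system from two
# near-identical starting points and watch the separation grow.

def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a, b = (1.0, 1.0, 1.0), (1.000001, 1.0, 1.0)       # differ by one part per million
for step in range(3000):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        sep = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print("t=%.1f separation=%.6f" % (step * 0.01, sep))
```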
And it's at this point that traditional AI programming, meaning how AI now approaches making a decision, would lapse into something like what dincer is talking about, but it would already be prepared for the situation because it has already been "thinking" about what might take place. I think the result could be really close to what needs to take place in reality, only far faster and more precise than current methods.
I once worked on a robotic AI algorithm called Q-learning, which I think can be applied to a UAV in this context as well. It is based on collecting data from sensors, then logging and interpreting this data flow in a matrix with the aim of maximizing "reward", i.e., achieving a goal. In our UAV case this might be reaching a predetermined location, where the reward is minimizing the distance to the target. The UAV will take the necessary actions to reach this goal, collect data from its sensors, log that data, and make comparisons along the way to select the best route to maximize the reward. If enough data is collected and reasonable pre-met conditions are logged, it can omit unnecessary or useless conditions and learn to use only successful actions to achieve the goal...
Something like that, basically... I am just brainstorming...
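To ground that brainstorm, here is a tabular Q-learning toy along the lines dincer describes: states are grid cells, actions are compass moves, and the reward is shaped by the shrinking distance to a fixed target. The grid size, learning rates, and rewards are invented:

```python
import random

# Tabular Q-learning on a 6x6 grid; reward = progress toward TARGET.

random.seed(1)
SIZE, TARGET = 6, (5, 5)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]
Q = {}                                               # (state, action) -> value

def step(state, action):
    nx = min(max(state[0] + action[0], 0), SIZE - 1)
    ny = min(max(state[1] + action[1], 0), SIZE - 1)
    new = (nx, ny)
    old_d = abs(state[0] - TARGET[0]) + abs(state[1] - TARGET[1])
    new_d = abs(nx - TARGET[0]) + abs(ny - TARGET[1])
    reward = 10.0 if new == TARGET else float(old_d - new_d)  # closer = reward
    return new, reward

for episode in range(500):
    s = (0, 0)
    while s != TARGET:
        if random.random() < 0.2:                    # explore occasionally
            a = random.choice(ACTIONS)
        else:                                        # exploit best known action
            a = max(ACTIONS, key=lambda act: Q.get((s, act), 0.0))
        s2, r = step(s, a)
        best_next = max(Q.get((s2, act), 0.0) for act in ACTIONS)
        old = Q.get((s, a), 0.0)
        Q[(s, a)] = old + 0.5 * (r + 0.9 * best_next - old)   # Q-learning update
        s = s2

s, path = (0, 0), [(0, 0)]                           # replay the learned policy
while s != TARGET and len(path) < 20:
    a = max(ACTIONS, key=lambda act: Q.get((s, act), 0.0))
    s, _ = step(s, a)
    path.append(s)
print(path)                                          # learned route to the target
```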
Hmm, sounds very interesting, but I'm not sure I understand exactly how you would apply chaos theory to an autopilot. Granted, I haven't studied chaos theory very much, but my understanding of it is the "butterfly effect": the fact that an extremely small change in a system can cause a very large one down the line. Also, my understanding of fractals is that they are shapes/systems that have essentially the same structure at all levels of magnification. These are probably incomplete definitions, so correct me if I'm wrong.
So taking your wind example... how would that theoretically work (assuming we had the sensors, processing power, etc. that we needed)? Would the AI get a profile of the wind swirls around the plane and use those to infer things about the larger system? Or to predict the future of the system? Or to "learn" about how swirls in general work, so that it knows in the future how to navigate them?
Absolutely, Chris, but I just wanted to put a philosophy of thought out there for how it might be possible to create something like intuition in AI. Statistics present a computer with the ability to choose based on the highest probability of success. Bayesian analysis is collecting data, or evidence, that either supports or contradicts a given theory, or in this case a decision that is useful for reaching a goal. In a very loose sense, this still becomes a Boolean choice by the AI. For instance, statistics show that under condition A there is a high probability of success in achieving a goal, so every time condition A happens the AI will already know what to do.
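A small Beta-Bernoulli sketch of that evidence-collection idea: each trial of "condition A, take action X" either succeeds or fails, and the estimated success rate sharpens with data until the choice collapses into the Boolean rule just described. The trial outcomes and threshold are invented:

```python
# Bayesian updating of a success rate with a Beta prior.

alpha, beta = 1.0, 1.0                    # uniform prior over success rate
trials = [1, 1, 0, 1, 1, 1, 0, 1]         # 1 = goal progress, 0 = not (invented)

for outcome in trials:
    alpha += outcome                      # count successes
    beta += 1 - outcome                   # ...and failures
    mean = alpha / (alpha + beta)
    print("estimated P(success | condition A) = %.2f" % mean)

# Once the posterior mean clears a threshold, the choice becomes the Boolean
# rule from the comment above: "if condition A, do action X".
ACT_THRESHOLD = 0.7
print("act on condition A:", alpha / (alpha + beta) > ACT_THRESHOLD)
```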
In the case of the AI finding itself in an unknown condition, it is possible to have it explore its options. Eventually there will be enough data collected for the AI to know what the statistics say about each of those options and what it should do if it encounters the situation again. It is at that point that the decision-making skills of the AI become Boolean, or at least a logic comparison of some sort.
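That exploration phase looks a lot like a multi-armed bandit. In the sketch below, the agent tries three maneuvers with unknown payoffs, tracks running averages, and the decision hardens into a simple comparison once enough data exists; the payoffs are invented:

```python
import random

# Epsilon-greedy bandit: explore heavily at first, then settle on the best.

random.seed(2)
true_payoff = {"climb": 0.3, "turn": 0.7, "hold": 0.5}    # hidden from the agent
counts = {a: 0 for a in true_payoff}
means = {a: 0.0 for a in true_payoff}

for trial in range(300):
    explore = random.random() < max(0.05, 1.0 - trial / 100.0)  # decaying epsilon
    a = random.choice(list(true_payoff)) if explore \
        else max(means, key=means.get)                    # Boolean-style comparison
    reward = 1.0 if random.random() < true_payoff[a] else 0.0
    counts[a] += 1
    means[a] += (reward - means[a]) / counts[a]           # running average

print({a: round(m, 2) for a, m in means.items()})         # "turn" should win
```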
In fact, I don't really expect to ever truly escape this level of "thinking" by a computer. Cause and effect are great "learning" tools, but how cause and effect is approached could change how AI operates. If an AI is analyzing cause and effect in a simulation, then it can approach an entirely new environment with a preconceived notion of what the highest probability of statistical success might be. In some environments, having this notion in hand before you get there might make all the difference.
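A crude Monte Carlo version of that "simulate before you arrive" idea: roll each candidate plan forward many times in a stochastic wind model and enter the new environment with the statistically best plan already chosen. The wind model and plans are invented:

```python
import random

# Pick the flight plan with the highest simulated success rate, offline.

random.seed(3)

def simulate(plan, gust_prob):
    """Crude model: each leg of the plan fails if a gust exceeds its margin."""
    return all(random.random() > gust_prob * leg_risk for leg_risk in plan)

plans = {"ridge_route": [0.2, 0.8, 0.3], "valley_route": [0.5, 0.5, 0.5]}
estimates = {}
for name, plan in plans.items():
    wins = sum(simulate(plan, gust_prob=0.4) for _ in range(10000))
    estimates[name] = wins / 10000.0                 # simulated success rate

print(estimates)
print("preconceived best plan:", max(estimates, key=estimates.get))
```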
Internet search engines, online travel agencies, newegg.com, and amazon.com all use artificial intelligence of a sort. It can't emulate the human brain, but the mechanics are understood well enough that great leaps can be a matter of just adding more clock cycles.
Most of the interesting work in robot AI is being done with cars, such as in the DARPA road race. They don't use chaos theory or fractals, as far as I know, but they do use Bayesian analysis and other statistical methods rather than just Boolean trees. And even there, most of that is in image recognition and sensor analysis.
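As a taste of that kind of statistical sensor analysis, here is a scalar Kalman-style filter that fuses noisy altitude readings into an estimate plus an uncertainty, rather than branching on raw values; the noise levels are invented:

```python
import random

# A scalar Kalman-style filter for a (static) altitude estimate.

random.seed(4)
true_alt, meas_var, proc_var = 100.0, 25.0, 0.01
est, est_var = 0.0, 1000.0                            # vague initial belief

for t in range(20):
    z = true_alt + random.gauss(0, meas_var ** 0.5)   # noisy sensor reading
    est_var += proc_var                               # predict: uncertainty grows
    k = est_var / (est_var + meas_var)                # Kalman gain
    est += k * (z - est)                              # correct toward measurement
    est_var *= (1 - k)                                # uncertainty shrinks
print("estimate %.1f +/- %.1f (true %.1f)" % (est, est_var ** 0.5, true_alt))
```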
I think we're still pretty far from what anyone might call a "smart" UAV, or anything much smarter than the autopilots in commercial jets.