Stanford computer scientists have developed an artificial intelligence system that enables robotic helicopters to teach themselves to fly difficult stunts by watching other helicopters perform the same maneuvers. The result is an autonomous helicopter that can perform a complete airshow of complex tricks on its own.
Much more in the full article here.
They are using onboard inertial sensors. The news stories are somewhat misleading in describing the training process as "watching". There's no vision-system component: it's all done by capturing the experienced operator's commands and correlating them with feedback from the onboard sensors for each specific maneuver. I'm not minimizing the accomplishment; they have tackled a very complicated control problem with impressive results. However, it is unclear how far along they are toward complete dynamic control, e.g. planning a flight path in a 3D simulation model and feeding it directly into vehicle control, though that certainly has to be the ultimate goal.
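The core idea described above, learning how commands map to vehicle behavior from recorded demonstrations, can be illustrated with a deliberately tiny sketch. This is not the Stanford team's actual apprenticeship-learning pipeline; it is a hypothetical one-dimensional system-identification example, fitting a linear model `x[t+1] = a*x[t] + b*u[t]` to logged sensor states `x` and pilot commands `u` by least squares:

```python
# Toy sketch (NOT the actual Stanford method): fit a linear dynamics model
#   x[t+1] = a * x[t] + b * u[t]
# from a logged demonstration, where x is a sensor reading (e.g. pitch rate)
# and u is the pilot's stick command at each timestep.

def fit_linear_dynamics(states, commands):
    """Least-squares fit of a, b via the 2x2 normal equations."""
    sxx = sxu = suu = sxy = suy = 0.0
    for t in range(len(states) - 1):
        x, u, y = states[t], commands[t], states[t + 1]
        sxx += x * x
        sxu += x * u
        suu += u * u
        sxy += x * y
        suy += u * y
    det = sxx * suu - sxu * sxu      # assumes the log is rich enough (det != 0)
    a = (sxy * suu - suy * sxu) / det
    b = (suy * sxx - sxy * sxu) / det
    return a, b

# Synthetic "demonstration" generated by true dynamics a=0.9, b=0.5:
states, commands = [0.0], []
for t in range(100):
    u = 1.0 if t % 2 == 0 else -1.0   # alternating stick input
    commands.append(u)
    states.append(0.9 * states[-1] + 0.5 * u)

a, b = fit_linear_dynamics(states, commands)  # recovers a ≈ 0.9, b ≈ 0.5
```

The real system identifies a much higher-dimensional model over many aligned demonstrations, but the principle is the same: the "teacher" is the correlation between logged commands and logged sensor response, not any visual observation.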
Hmm, I'm curious whether anybody here will try something similar. Did they just use the prerecorded transmitter commands to create a statistical model, or did they also include the onboard sensors (gyro, GPS)?
Very impressive! I don't think they used a Basic Stamp for this, but they did mention that some of the calculations were done on the ground, with commands transmitted wirelessly to the helicopter at 20Hz.
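The ground-station setup mentioned above, heavy computation off-board with commands radioed up at a fixed 20Hz rate, amounts to a fixed-period control loop. Here is a minimal hypothetical sketch; `compute_command` and `send_command` are stand-ins for the real planner and wireless link, which the article does not detail:

```python
# Hypothetical sketch of a fixed-rate ground-station loop: compute each
# command on the ground, then send it over the (stand-in) wireless link
# every 50 ms, i.e. at 20 Hz.
import time

RATE_HZ = 20
PERIOD = 1.0 / RATE_HZ  # 0.05 s between command packets

def run_control_loop(compute_command, send_command, n_ticks):
    """Send one command per tick, sleeping off whatever time is left over."""
    next_deadline = time.monotonic()
    for tick in range(n_ticks):
        cmd = compute_command(tick)   # heavy computation stays on the ground
        send_command(cmd)
        next_deadline += PERIOD
        # Sleep until the next 20 Hz deadline (skip the sleep if we overran).
        delay = next_deadline - time.monotonic()
        if delay > 0:
            time.sleep(delay)

# Example with a dummy link that just logs what was "transmitted":
sent = []
run_control_loop(lambda t: {"tick": t, "collective": 0.0},
                 sent.append, n_ticks=5)
```

Advancing an absolute deadline (rather than sleeping a flat 50 ms after each send) keeps the rate steady even when the per-tick computation time varies, which matters when the controller on the other end expects commands at a regular interval.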