Unlike previous work, our system does NOT rely on external motion capture to localize and control the vehicle. All of the sensing and computation required for local position control is performed onboard the vehicle.
We have collaborated with members of the Robotics and State Estimation Lab at the University of Washington to use their RGBD-SLAM algorithms for simultaneous localization and mapping (SLAM), building metrically accurate and visually pleasing models of the environments through which the MAV has flown.
More info can be found here:
http://groups.csail.mit.edu/rrg/index.php?n=Main.VisualOdometryForGPS-DeniedFlight
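The RSE Lab's actual RGBD-SLAM pipeline is described at the link above; the sketch below is only a rough illustration of the general idea, not the lab's code. It shows how frame-to-frame pose estimates from an RGB-D front-end can be chained into a global trajectory and used to fuse depth images into a single metric point-cloud map. The camera intrinsics and relative poses here are hypothetical placeholders; in a real system the relative poses would come from RGB-D frame alignment (e.g., feature matching plus ICP refinement).

```python
# Minimal sketch of RGB-D mapping: chain relative poses into a trajectory,
# then back-project depth images into a shared world frame.
# NOT the RSE Lab's RGBD-SLAM implementation; all values are made up.
import numpy as np

def se3(R, t):
    """Pack a 3x3 rotation R and translation t (3,) into a 4x4 pose matrix."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def backproject(depth, K):
    """Back-project a depth image (meters) into camera-frame 3D points
    using a pinhole model with intrinsics K (hypothetical values below)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    valid = z > 0
    x = (u.ravel() - K[0, 2]) * z / K[0, 0]
    y = (v.ravel() - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=1)[valid]

# Hypothetical intrinsics for a Kinect-class RGB-D sensor.
K = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])

# Relative poses T_{k-1,k} would come from frame-to-frame RGB-D alignment;
# here they are fabricated as a steady 0.1 m forward motion per frame.
relative_poses = [se3(np.eye(3), np.array([0.1, 0.0, 0.0])) for _ in range(5)]

# Chain relative poses into global poses: T_world_k = T_world_{k-1} @ T_{k-1,k}.
poses = [np.eye(4)]
for T_rel in relative_poses:
    poses.append(poses[-1] @ T_rel)

# Accumulate a global map by transforming each frame's points into the world.
depth = np.full((480, 640), 2.0)  # fake depth image: a flat wall 2 m away
world_map = []
for T in poses:
    pts = backproject(depth, K)
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    world_map.append((T @ pts_h.T).T[:, :3])
world_map = np.vstack(world_map)
print(world_map.shape)
```

A full SLAM system would additionally detect loop closures and optimize the whole trajectory (e.g., via pose-graph optimization) rather than naively chaining odometry, which is what keeps the resulting maps metrically accurate over long flights.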
Comments
Seriously, the videos from the two groups using external, movie-studio-quality motion tracking are passed around the office now for a good chuckle among my co-workers. It must be fun for them to live in a dream world of perfect localization, but it has zero practicality in real life.