In this video, we present the application of a visual-inertial (camera & IMU) localization algorithm to a Parrot AR.Drone quadrotor, enabling real-time autonomous navigation in GPS-denied areas.

A sensor package for data logging, consisting of a BeagleBone computer, an InterSense NavChip IMU, and a Point Grey Chameleon monochrome camera, was attached to the AR.Drone.

The IMU operates at 100 Hz and the camera at 7.5 Hz. Currently, IMU measurements and camera images are logged on-board and processed off-board.

Details

The miniaturization, reduced cost, and increased accuracy of cameras and inertial measurement units (IMUs) make them ideal sensors for determining the 3D position and attitude of vehicles navigating in GPS-denied areas.

In particular, fast and highly dynamic motions can be precisely estimated over short periods of time by fusing rotational velocity and linear acceleration measurements provided by the IMU’s gyroscopes and accelerometers, respectively.
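
As a rough illustration of this dead-reckoning step (a sketch, not the code used in this work; the variable names and the simple first-order integration are assumptions made for clarity), one propagation step of attitude, velocity, and position from a single gyroscope/accelerometer sample could look like this:

    import numpy as np

    GRAVITY = np.array([0.0, 0.0, -9.81])  # world-frame gravity (assumes a z-up world frame)

    def skew(w):
        """3x3 skew-symmetric matrix of a 3-vector."""
        return np.array([[0.0, -w[2], w[1]],
                         [w[2], 0.0, -w[0]],
                         [-w[1], w[0], 0.0]])

    def propagate(R, v, p, gyro, accel, dt=0.01):
        """One Euler step of IMU dead-reckoning (dt = 1/100 s for a 100 Hz IMU).

        R: 3x3 body-to-world rotation, v: world-frame velocity, p: world-frame position.
        gyro: rad/s in the body frame, accel: specific force in m/s^2 in the body frame
        (biases assumed already removed).
        """
        R_new = R @ (np.eye(3) + skew(gyro) * dt)   # first-order attitude update
        a_world = R @ accel + GRAVITY               # rotate specific force, add gravity
        v_new = v + a_world * dt
        p_new = p + v * dt + 0.5 * a_world * dt**2
        return R_new, v_new, p_new

Over short horizons this kind of integration is accurate, but bias and noise accumulate, which is exactly what the vision aiding described next corrects.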

On the other hand, errors caused by the integration of the bias and noise in the inertial measurements can be significantly reduced by processing observations of point features detected in camera images, in what is known as a vision-aided inertial navigation system (V-INS).
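
The core of that correction is a reprojection residual. The sketch below is illustrative only (a standard pinhole camera model with assumed names, not the filter used here): it compares where a feature should appear given the current IMU-predicted pose with where it was actually detected, and a filter update uses that difference to pull the estimate back toward the camera measurement.

    import numpy as np

    def project(p_world, R_wc, cam_pos_world, fx, fy, cx, cy):
        """Pinhole projection of a 3D world point into image coordinates.

        R_wc: rotation taking world-frame vectors into the camera frame.
        cam_pos_world: camera position expressed in the world frame.
        """
        p_cam = R_wc @ (p_world - cam_pos_world)    # feature expressed in the camera frame
        u = fx * p_cam[0] / p_cam[2] + cx
        v = fy * p_cam[1] / p_cam[2] + cy
        return np.array([u, v])

    def reprojection_residual(z_detected, p_world, R_wc, cam_pos_world, intrinsics):
        """Difference between the detected and the predicted feature location;
        this is what the filter update drives toward zero."""
        z_predicted = project(p_world, R_wc, cam_pos_world, *intrinsics)
        return z_detected - z_predicted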

In this video, we present the latest demonstration of a V-INS implementation whose computational complexity is linear in the number of features tracked in the camera images. The implementation is targeted at enabling quadrotors to navigate with high precision in GPS-denied areas.
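
To illustrate what linear complexity means in practice (again a sketch under assumed names, not the actual implementation): if each tracked feature is absorbed into the estimate with a bounded, roughly constant cost, then processing N features costs O(N) overall.

    def update_with_features(state, covariance, feature_tracks, per_feature_update):
        """Illustrative O(N) update loop: each tracked feature is folded into the
        estimate independently, so total cost grows linearly with the number of tracks."""
        for track in feature_tracks:                # one bounded-cost update per feature
            state, covariance = per_feature_update(state, covariance, track)
        return state, covariance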

Relevant Publications

[1] A. I. Mourikis, N. Trawny, S. I. Roumeliotis, A. Johnson, A. Ansar, and L. Matthies, "Vision-Aided Inertial Navigation for Spacecraft Entry, Descent, and Landing," IEEE Transactions on Robotics, 25(2), pp. 264-280, April 2009.

[2] J. A. Hesch, D. G. Kottas, S. L. Bowman, and S. I. Roumeliotis, "Towards Consistent Vision-aided Inertial Navigation," 10th International Workshop on the Algorithmic Foundations of Robotics, Cambridge, Massachusetts, USA, June 13-15, 2012.

[3] D. G. Kottas, J. A. Hesch, S. L. Bowman, and S. I. Roumeliotis, "On the Consistency of Vision-aided Inertial Navigation," 13th International Symposium on Experimental Robotics, Quebec City, Canada, June 17-20, 2012.

Comments

  • Hi

    This is Jerry. I really appreciate your excellent results, and I need to do the same work on my AR.Drone 2.0. May I ask whether I could use its built-in IMU instead? (Maybe it is an MPU6050; I am not sure.) I would also appreciate some technical suggestions. Do you have a paper on this?

  • Hi,

    This is really very good work on vision-aided inertial navigation. I need some help with this; could you please offer some suggestions?

  • Hi Maxime, I looked at the RT-SLAM web page you referenced and that is certainly the direction I am thinking in, thank you.

    My basic thought is that with the right mix of sensors you can establish significant points and possibly map significant "corridors" and "openings" for autonomous navigation with real-time update and correction.

    A Nav map for an autonomous multicopter is a lot different (and probably a lot simpler) than a picture.

    Basically right now I am looking at what hardware is most useful to produce a navigational matrix that requires the least computational and data intensity for the necessary end result.

    3D IR TOF cameras seem to be among the most promising data acquisition devices, and the forthcoming PrimeSense "Capri" may be a good fit, or possibly even a "Leap".

    In the end though it is going to be about providing the right mix of sensors that can let you effectively reduce the computational overhead and data set to manageable levels.

  • Excellent results guys,

    I notice that although the BeagleBone is definitely a reasonable hobby solution, the InterSense chip is a bit pricey at $1,500.00 (and it shows as being "End of Lifed" at InterSense).

    I am wondering if some of the current common and cheap accelerometers and gyros have sufficient resolution and sensitivity to replace the expensive and no-longer-available InterSense chip (say, like the ones on the APM 2.5).

    And they don't exactly give away the Point Grey cameras either.

    I take it you chose the monochrome version to make edge matching easier than it would have been with color. It seems like you could have used a color-to-mono conversion program to closely approximate that result, and then perhaps used a more commonly available and cheaper camera.

    Not criticizing at all here; this is a brilliant result. I am just trying to figure out how we might make use of it.

    I personally am intensely interested in GPS-free interior navigation relying on obstacle detection and avoidance and on significant-point mapping, and you guys have basically covered a major portion of that.

  • I think this is some kind of mapping and tracking algorithm based on landmarks and an IMU to get more accurate data. A vision system based on landmarks, like this:

    http://www.openrobots.org/wiki/rtslam/
  • In short, the demonstrated system can track, in real time and with high accuracy, the position, attitude (orientation), velocity, and biases (gyroscope & accelerometer) using just two very lightweight sensors, an IMU & a camera (that's why they fit on the AR.Drone, which has a tiny payload capacity).

    This means our system is independent of external positioning systems like VICON or GPS and uses only on-board sensing while operating in an unknown indoor environment.
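
    Roughly, the state we estimate looks like the sketch below (field names illustrative only, not our actual code):

        import numpy as np

        class VinsState:
            """Quantities tracked by the estimator, as listed above."""
            def __init__(self):
                self.orientation = np.array([0.0, 0.0, 0.0, 1.0])  # unit quaternion, body-to-world attitude
                self.position = np.zeros(3)     # meters, world frame
                self.velocity = np.zeros(3)     # m/s, world frame
                self.gyro_bias = np.zeros(3)    # rad/s, slowly varying gyroscope bias
                self.accel_bias = np.zeros(3)   # m/s^2, slowly varying accelerometer bias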

  • I'm a little confused about what's actually happening in the video.

    Is someone directing the drone around the hallways while you present the data recorded by the IMU and the camera?

    Is the end goal to create a system that can follow a prescribed route, using the camera to assist in collision detection?

    Either way, good job on getting it all tied in together. 

    I'm off to see if I can hunt down your refs.

    Thanks,

    Phill
