Vision-based positioning system for drones


Every drone uses GPS for everything from position hold to waypoint navigation, but the performance of current GPS systems leaves massive room for improvement. Imagine a drone that holds its position so well that its performance is comparable to position hold under a motion capture system, and likewise for waypoint navigation.

I have implemented monocular visual odometry on Nvidia's Jetson TK1. Using this implementation, GPS can be replaced with a camera-based system. I didn't have the resources to implement this on an actual quadrotor, but I am quite confident that visual odometry can outperform a GPS system.
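For anyone curious how a camera-based estimate like this would actually reach a flight controller, here is a minimal sketch. It assumes the autopilot accepts the standard MAVLink VISION_POSITION_ESTIMATE message (PX4 does) and that the odometry pipeline already produces a pose; the serial port and the get_vo_pose() helper are placeholders, not part of my implementation.

    import time
    from pymavlink import mavutil

    # Placeholder: replace with however your VO pipeline exposes its pose.
    def get_vo_pose():
        # (x, y, z) in metres, (roll, pitch, yaw) in radians, NED frame.
        return 0.0, 0.0, -1.0, 0.0, 0.0, 0.0

    # Hypothetical serial link to the autopilot; adjust port/baud to your setup.
    master = mavutil.mavlink_connection('/dev/ttyUSB0', baud=921600)
    master.wait_heartbeat()

    while True:
        x, y, z, roll, pitch, yaw = get_vo_pose()
        usec = int(time.time() * 1e6)
        # Standard MAVLink message the estimator can fuse in place of GPS.
        master.mav.vision_position_estimate_send(usec, x, y, z, roll, pitch, yaw)
        time.sleep(0.02)  # ~50 Hz updates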

Here is the blog entry for the implementation. 

http://www.robnsngh.com/2014/11/monocular-visual-odometry-using-nvidia-jetson-tk1-for-uavs/

Check out the actual implementation video from the creators. 


Comments

  • @robin_singh Hi, is the blog post still available? The link seems to direct to the wrong website. Thanks!

  • Hi Robin,

    Got to say I am very impressed with this work. Just to introduce myself, I am the technical manager of Remote Aerial Surveys, and we have some existing applications where a system like this would be extremely useful. One such example is inspecting the underside of the Humber Bridge, where GPS is unreliable.

    I will PM you my contact details, and maybe we could have a chat about moving this from a research tool to a commercial application.

    Kind Regards

    James

  • Thank you, I didn't realise you were working for the PX4 dev team.

  • Developer

    Hi guys.

    Here is the link to the PX4 wiki page. Let me know if anything isn't clear: https://pixhawk.org/dev/ros/visual_estimation

    And a start page for those looking into ROS development: https://pixhawk.org/dev/ros/start. @Gary, you can do this right now. Check out our pages ;)

    This is the open-source package that Robin is using: https://github.com/uzh-rpg/rpg_svo/
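    If you want to try consuming the package's output, here is a minimal listener sketch. The topic name svo/pose and the PoseWithCovarianceStamped message type are my assumptions about its interface; check the wiki page above for the actual names.

        import rospy
        from geometry_msgs.msg import PoseWithCovarianceStamped

        def pose_callback(msg):
            # Camera position in the VO frame (scale is arbitrary for a
            # monocular setup until it is recovered externally).
            p = msg.pose.pose.position
            rospy.loginfo("SVO pose: x=%.3f y=%.3f z=%.3f", p.x, p.y, p.z)

        if __name__ == '__main__':
            rospy.init_node('svo_pose_listener')
            # Assumed topic name; verify with `rostopic list` while SVO runs.
            rospy.Subscriber('svo/pose', PoseWithCovarianceStamped, pose_callback)
            rospy.spin()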

  • Really excellent implementation, Robin.

    I am very impressed with the number of real-time data points and the precision of the response.

    I think this is very much at the leading edge, and it appears reasonable to implement.

    Can't wait to use it myself.

    Best Regards,

    Gary

  • @Kabir, can you post a link to your wiki? I couldn't find it with Google.

  • Developer

    @Nicholas, this isn't optical flow. While a KLT algorithm is used for the initial triangulation, the rest of the tracking is direct and "active", i.e. it uses key-frames and image alignment to estimate camera egomotion. The semi-direct method makes it highly efficient: we have run visual updates at more than 200 Hz on onboard consumer computers (Core i5) and at up to 60 Hz on an Odroid U3. This isn't possible with an optical flow algorithm working at the same scale.

    Optical flow, by comparison, is a passive system that compares consecutive frames (there is no dependency on tracking the same features over a period). Feature extraction and matching are computationally expensive, and thus it is slower. The sketch below contrasts the two approaches.
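    To make the distinction concrete, here is a small OpenCV sketch (mine, not from SVO) contrasting the two tracking styles: frame-to-frame optical flow versus tracking the same features against a fixed key-frame, which is closer to what I describe above.

        import cv2

        cap = cv2.VideoCapture(0)  # any camera or video file will do

        ok, frame = cap.read()
        keyframe = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Features detected once in the key-frame, reused across many frames.
        key_pts = cv2.goodFeaturesToTrack(keyframe, maxCorners=200,
                                          qualityLevel=0.01, minDistance=10)
        prev, prev_pts = keyframe.copy(), key_pts.copy()

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

            # (a) "Passive" optical flow: track only from the previous frame,
            #     so small errors accumulate frame to frame.
            prev_pts, st1, _ = cv2.calcOpticalFlowPyrLK(prev, gray, prev_pts, None)

            # (b) Key-frame tracking: always align against the same reference,
            #     so the estimate stays anchored until a new key-frame is set.
            kf_pts, st2, _ = cv2.calcOpticalFlowPyrLK(keyframe, gray, key_pts, None)

            prev = gray

    A real pipeline manages a set of key-frames and re-triangulates points, but the structural difference shows even in this toy loop.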

  • Developer

    Hi,

    We already fly this system on the PX4 flight stack. It's called Semi-direct monocular Visual Odometry (SVO).

    It's fairly easy to set up, and I even have a tutorial for interested people on our wiki. Testers welcome!

    Do you actually take advantage of the CUDA cores to accelerate the feature alignment? In that case, it would be an interesting exercise. Otherwise, there's not much point in running it on a Tegra K1 when you can do it on an Odroid for $60 ;) SVO is a highly efficient algorithm and can achieve good track rates on just the TK1's ARM core, so what exactly did you "implement" here?

    A couple of questions / suggestions:

    1. How are you proposing to recover scale from the monocular system for vehicle control? (A rough sketch of one common approach follows this list.)

    2. As you said, the C920 isn't very good for this purpose, but you can go fairly far with cheap webcams if you hack a wide-angle lens onto one. Matching doesn't work well for a DFOV < 90 degrees.

    3. And you mention the model in the visualizer on your blog, but it's not a URDF ;) It's a hexa model created from circle markers, etc., as found in RViz. Originally from libsfly_viz.
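    On question 1, one common approach (a sketch, not necessarily what Robin plans) is to fit the unscaled VO altitude against a metric altitude source such as a downward rangefinder, then apply the scale factor to the whole trajectory:

        import numpy as np

        def estimate_scale(vo_altitudes, metric_altitudes):
            # Least-squares scale between unscaled monocular VO altitude and
            # a metric source (e.g. sonar): minimise sum((s*vo - metric)^2),
            # which gives s = (vo . metric) / (vo . vo).
            vo = np.asarray(vo_altitudes)
            metric = np.asarray(metric_altitudes)
            return float(np.dot(vo, metric) / np.dot(vo, vo))

        # Hypothetical samples logged while the vehicle climbs after takeoff.
        vo_z    = [0.10, 0.21, 0.33, 0.47, 0.60]  # arbitrary VO units
        sonar_z = [0.52, 1.05, 1.64, 2.33, 3.01]  # metres

        s = estimate_scale(vo_z, sonar_z)                # ~5.0 here
        metric_xyz = s * np.array([0.33, -0.12, 0.47])   # scale the full VO pose

    Since monocular scale drifts, you would refresh this estimate continuously or fuse it in the state estimator rather than compute it once.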

  • Moderator

    Have a look at OptiPilot from back in the wayback days of 2010.

    I was always impressed by the tree pull up. That's now a company owned by Parrot.

  • Isn't this what optical flow does?