In the quest for daylight flying, this is our first attempt at detecting the position of an aircraft using the SURF implementation in OpenCV. It's a long way from detecting something flying in a room with lots of background detail, and even with a noise-free background it has a hard time with accuracy. A single 3.7GHz core manages 8fps.

This is the algorithm used by every augmented reality app ever made. It was invented in 2006.

Another problem with aircraft tracking is that the aircraft is a 3D object. All the demos of SURF show it tracking planar objects. It could probably track the Marcy 1 logo, but not the complete aircraft.

Also noticed SURF doesn't have any transparency support, so the reference image must either be a rectangular object or have no background detail.

SURF isn't the answer we were hoping would enable daylight flying. You're still limited to old-fashioned keying. Difference keying is still the most effective way to separate the copter from the background, but it requires a stationary camera.
There are brute force techniques. They would require a cascaded search: for each resolution, draw every possible ellipse and take the best match. That wouldn't handle rotated ellipses, but those only occur during takeoff. The camera could stay still during takeoff and use difference keying.
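That cascaded brute force search could look something like this minimal numpy sketch. Everything here is illustrative: axis-aligned ellipses only, each candidate scored by the mean brightness inside it, and the function names are made up.

```python
import numpy as np

def ellipse_mask(h, w, cy, cx, ry, rx):
    """Binary mask of an axis-aligned ellipse (no rotation, per the text)."""
    ys, xs = np.ogrid[:h, :w]
    return ((ys - cy) / ry) ** 2 + ((xs - cx) / rx) ** 2 <= 1.0

def best_ellipse(frame, radii):
    """Brute force search: score every center/radius pair, keep the best.

    frame: 2D float array, ideally an already-keyed foreground image.
    radii: iterable of (ry, rx) pairs, coarse to fine ("cascaded").
    """
    h, w = frame.shape
    best = (-np.inf, None)
    for ry, rx in radii:
        # stride by half a radius so the search stays tractable
        for cy in range(ry, h - ry, max(1, ry // 2)):
            for cx in range(rx, w - rx, max(1, rx // 2)):
                m = ellipse_mask(h, w, cy, cx, ry, rx)
                score = frame[m].mean()  # mean brightness inside the candidate
                if score > best[0]:
                    best = (score, (cy, cx, ry, rx))
    return best
```

Scoring by mean brightness assumes the foreground has already been keyed to white; feeding a coarse-to-fine radius list gives the cascaded behavior described above.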
There's the Hough transform for ellipse detection, which seems to be just another brute force search.
There's hybrid difference keying: make the camera stop moving every few frames to recompute the difference key.
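A sketch of that hybrid scheme in numpy. The class name and parameters are made up, and the camera is assumed to be stationary on any frame used as the new background.

```python
import numpy as np

class HybridDifferenceKeyer:
    """Difference keying with a periodically recomputed background.

    refresh_every and threshold are illustrative values; the camera must
    hold still on the frames captured as the new background.
    """

    def __init__(self, refresh_every=30, threshold=0.15):
        self.refresh_every = refresh_every
        self.threshold = threshold
        self.background = None
        self.count = 0

    def key(self, frame):
        """Return a boolean foreground mask for a grayscale float frame."""
        if self.background is None or self.count % self.refresh_every == 0:
            # every N frames: stop the camera and grab a fresh background
            self.background = frame.copy()
        self.count += 1
        return np.abs(frame - self.background) > self.threshold
```

The frame used to refresh the background keys to an empty mask, which matches the behavior described: the copter is lost for a moment every few frames while the key is recomputed.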
It's not really obvious why the thing needs to fly in daylight. Luma keying in darkness is basically the poor man's Vicon, and Vicon, with its infinite budget, uses IR cameras to remove the background.

Finally, we have some footage of what Marcy 1 looks like in the flying space. Handheld and wide-angle shooting opens new possibilities, if not good quality.

Views: 1884

Comment by James masterman on February 12, 2012 at 4:52am

It's a shame that SURF doesn't cut it in OpenCV. There are others, like the excellent TLD project from here, that may be worth a try. That one does require MATLAB, though.

On a related note, I've always thought that vision-based navigation of UAVs was a natural evolution for surveillance-based work. To be able to track what you see and have the plane automatically follow would be great. There are some products in the civilian space like this one, but scope for much more. It would be great to see it in APM one day!

Comment by penpen on February 12, 2012 at 5:43am

SLAM


Moderator
Comment by Alex on February 12, 2012 at 7:33am

Have you tried using a CamShift-based tracker?


Developer
Comment by Adam Rivera on February 12, 2012 at 10:02am

@Jack: I have spent a lot of time on an object tracking algorithm that shares some of the same features seen in the OpenTLD project. See my blog posts:

http://diydrones.com/profiles/blogs/object-tracking-proof-of-concept

http://diydrones.com/profiles/blogs/object-tracking-working-on-miss...

Once I have integrated this into the mission planner you may be able to leverage it.

Comment by Jack Crossfire on February 12, 2012 at 3:53pm

It seems that if separating the object from the background were easy, Vicon would already be doing it.

Comment by leonardo.bueno on February 12, 2012 at 5:35pm

If your camera is fixed and your background static, as seems to be the case, background subtraction with OpenCV is quite easy.

Comment by Bill King on February 13, 2012 at 6:03pm

Why not try a Microsoft Kinect plus OpenNI?

Comment by John Wiseman on February 13, 2012 at 6:41pm

The usual solution to using SURF to detect and track a 3D object from different angles is to create multiple models of the object from different angles.

For example, place the aircraft on a turntable and take 12 photos covering 0-360 degrees, and maybe a couple more from the top and bottom perspectives.
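Once descriptors have been extracted from each turntable photo, picking the best-matching view could be sketched like this (pure numpy brute-force matching with Lowe's ratio test; the function name and parameters are hypothetical, and each stored view needs at least two descriptors):

```python
import numpy as np

def best_view(query_desc, view_descs, ratio=0.75):
    """Pick which stored view of the aircraft the current frame matches best.

    query_desc: (N, D) array of descriptors from the current frame.
    view_descs: list of (M, D) arrays, one per turntable photo (M >= 2).
    """
    counts = []
    for desc in view_descs:
        # pairwise distances between query descriptors and this view's
        d = np.linalg.norm(query_desc[:, None, :] - desc[None, :, :], axis=2)
        d.sort(axis=1)
        # Lowe's ratio test: nearest match must clearly beat the runner-up
        good = d[:, 0] < ratio * d[:, 1]
        counts.append(int(good.sum()))
    return int(np.argmax(counts)), counts
```

The view with the most ratio-test survivors is taken as the aircraft's current orientation; a real implementation would substitute SURF descriptors and an approximate nearest-neighbor index for the brute-force distance matrix.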

I'm not sure you're going to have enough features to track your aircraft reliably, though--in the first video it looks like the high-contrast logo is seen pretty well, but how many features are you getting from the rest of it?

Could stereo tracking work?

Comment by John Wiseman on February 13, 2012 at 6:44pm

One technique I've used to speed up feature extraction and matching is to run optical flow over the images and black out any regions that don't show motion.  If your camera can move, you'd have to do something slightly fancier--like segmenting the image using optical flow, separating background movement from foreground movement.
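That motion-gating idea can be sketched crudely in numpy, using simple frame differencing as a stand-in for real optical flow (function name, threshold, and dilation count are all made up):

```python
import numpy as np

def motion_gate(prev, curr, thresh=0.1, dilate=2):
    """Black out regions with no frame-to-frame motion before running
    feature extraction, so the detector only sees the moving object."""
    moving = np.abs(curr - prev) > thresh
    # grow the mask a little so features on the object's edges survive
    for _ in range(dilate):
        m = moving.copy()
        m[1:, :] |= moving[:-1, :]
        m[:-1, :] |= moving[1:, :]
        m[:, 1:] |= moving[:, :-1]
        m[:, :-1] |= moving[:, 1:]
        moving = m
    return np.where(moving, curr, 0.0)
```

For the moving-camera case the comment describes, the differencing step would be replaced by dense optical flow with the dominant (background) motion subtracted out before thresholding.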


© 2014   Created by Chris Anderson.