Object Tracking - Working on Mission Planner Integration


I am currently working on integrating an object tracking algorithm of my own making into the Mission Planner. The algorithm combines Lucas-Kanade optical flow and SURF feature detection, and attempts to learn negative and positive reinforcement values as it runs. It is still in the early stages of development, but I hope to have something fully functional within a few months.

 

My hope is to use this to aid in automatic centering of the tracked object by sending commands to the APM to move the camera gimbal based on the tracked object's distance from the center of the frame. The videos below show two demonstrations of this tracking algorithm; in both cases it performs well despite background noise.
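For anyone curious how the pixel-offset-to-gimbal mapping could work, here is a minimal sketch of a proportional centering step. It illustrates the general idea only, not the actual Mission Planner code: the frame size, gain, deadband, and the way corrections get sent to the APM are all placeholder assumptions.

    # Minimal sketch (illustrative Python, not the actual Mission Planner code):
    # map the tracked object's pixel offset from frame center to small pan/tilt
    # corrections. FRAME_W/FRAME_H, GAIN, and DEADBAND are made-up placeholders.

    FRAME_W, FRAME_H = 640, 480
    GAIN = 0.05      # degrees of gimbal correction per pixel of error (assumed)
    DEADBAND = 10    # ignore offsets smaller than this many pixels

    def centering_correction(obj_x, obj_y):
        """Return (pan_deg, tilt_deg) needed to re-center the tracked object."""
        err_x = obj_x - FRAME_W / 2.0
        err_y = obj_y - FRAME_H / 2.0
        pan = GAIN * err_x if abs(err_x) > DEADBAND else 0.0
        tilt = GAIN * err_y if abs(err_y) > DEADBAND else 0.0
        return pan, tilt

    # Example: object at (520, 180) in a 640x480 frame ->
    # pan = +10.0 deg (object right of center), tilt = -3.0 deg (above center);
    # corrections like these would then be sent to the APM as gimbal commands.
    pan_deg, tilt_deg = centering_correction(520, 180)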

 

NOTE: I apologize for the terrible frame rate of the videos. I don't have very good screen capture software. The second video is a little higher quality.



Comment by Dimitri on October 11, 2011 at 8:50am

I think it would be better to integrate it into QGroundControl than into the APM planner ;)


Comment by Adam Rivera on October 11, 2011 at 8:59am

Dimitri,

Unfortunately I have no experience with QGroundControl. Perhaps someone else can port my work to their GCS.

Thanks,

Adam

Comment by Dimitri on October 11, 2011 at 9:03am

But your work looks very complex, it's impressive! I'm not sure, but I think I've seen similar things done with the OpenCL library (but not for drones).


Comment by Adam Rivera on October 11, 2011 at 9:21am

Dimitri: This work is based on the OpenCV library. What I have done is take a few existing algorithms and add a layer on top to improve accuracy and speed. There isn't one single algorithm that functions perfectly on its own. I have added:

1. Multi-threading for performance.

2. GPU processing for performance.

3. Several mathematical computations to improve accuracy.

4. Learning (negative and positive).

5. Logic to deal with occlusion based on the negative and positive learning.

6. Configurable accuracy threshold.

7. Re-acquisition of the target when it is out of frame or when we have lost certainty.

 

As I stated above, it uses the Lucas-Kanade algorithm for optical flow tracking and SURF for object acquisition based on a feature set. The way in which I have improved and integrated those two with learning is what makes this unique.
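To make the acquire/track loop concrete, here is a heavily condensed sketch of how SURF acquisition and Lucas-Kanade tracking can be combined with a simple re-acquisition rule. It is an illustration of the general technique, not the actual code: the point-count confidence rule, the ROI handling, and the input video name are placeholders, and SURF requires the opencv-contrib "non-free" build.

    # Condensed illustration of a SURF-acquire / Lucas-Kanade-track loop
    # (not the actual Mission Planner code). The ">= 5 surviving points"
    # rule stands in for the learned accuracy threshold described above.
    import cv2
    import numpy as np

    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    lk_params = dict(winSize=(21, 21), maxLevel=3,
                     criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT,
                               30, 0.01))

    def acquire(gray, roi):
        """Detect SURF keypoints inside a region of interest; return LK-ready points."""
        x, y, w, h = roi
        kps = surf.detect(gray[y:y + h, x:x + w], None)
        if not kps:
            return None
        pts = np.float32([[kp.pt[0] + x, kp.pt[1] + y] for kp in kps])
        return pts.reshape(-1, 1, 2)

    cap = cv2.VideoCapture("flight.avi")           # placeholder input video
    ok, frame = cap.read()
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pts = acquire(prev_gray, (300, 200, 80, 80))   # user-selected target box

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        full = (0, 0, gray.shape[1], gray.shape[0])
        if pts is None:                            # nothing to track: search whole frame
            pts = acquire(gray, full)
        else:
            new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts,
                                                          None, **lk_params)
            good = new_pts[status.flatten() == 1]
            if len(good) < 5:                      # confidence lost: re-acquire via SURF
                pts = acquire(gray, full)
            else:
                cx, cy = good.reshape(-1, 2).mean(axis=0)  # object center for centering
                pts = good.reshape(-1, 1, 2)
        prev_gray = gray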

Comment by Ben Schwehn on October 11, 2011 at 10:28am

Cool!

Some questions:

Where do you envision the tracking/recognition software running? Do you lift a stripped-down notebook-class computer (you mention GPU processing) with the copter, or run it via a ground station? If the latter, how well do the tracking and recognition algorithms cope with the glitches and interference in video received via your typical FPV camera/Rx/Tx setup?


Comment by Adam Rivera on October 11, 2011 at 11:07am

Ben:

  1. The software will run on one's ground station computer. If the machine you are using as a ground station (running Mission Planner) does not support multi-threading or GPU processing, it will revert to a single-threaded, CPU-only process (a rough sketch of that kind of capability check is below).
  2. Any interference in the image could potentially degrade the accuracy of the tracking algorithm, but it is designed to handle such interference. A good example of that can be seen above in the second video. Notice that the helicopter being tracked flies behind some heavy (noisy) brush cover. The software is able to detect the tracked object within a certain threshold of noise. Also, the learning aspect of the software will begin to interpret noise in the frame as positive and negative features, essentially ignoring it.

That being said, it is not perfect, as you can see from the video. There will be times when it loses "sight" of the object. The test footage above was more of a worst-case scenario, with a fast-moving object in a high-noise environment. If you happen to have some FPV footage on hand that you can send me, I would be happy to use it as a test and post the results up here.
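Regarding point 1, the startup check could look something like this (an illustrative Python sketch under my own assumptions, not the exact implementation):

    # Illustrative capability probe: count CPU cores and CUDA devices at
    # startup, then pick the processing mode accordingly.
    import multiprocessing
    import cv2

    def detect_capabilities():
        cores = multiprocessing.cpu_count()
        try:
            gpus = cv2.cuda.getCudaEnabledDeviceCount()  # 0 on non-CUDA builds
        except AttributeError:                           # cv2 built without the cuda module
            gpus = 0
        return {"worker_threads": max(1, cores - 1), "use_gpu": gpus > 0}

    caps = detect_capabilities()
    # On a single-core, no-GPU ground station this yields
    # {"worker_threads": 1, "use_gpu": False}: a single-threaded, CPU-only tracker.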

Thanks,

Adam

Comment by Ben Schwehn on October 11, 2011 at 11:30am

Hi Adam,

thanks for the quick reply.

Sounds like a very nice project, I wish you the best of success :)

If this works out, I think it will be just awesome for aerial videography...

 

No, I don't have any FPV footage (yet...). I was just wondering whether you had already tested how well the algorithms cope under those particular conditions.

Ben

 


Comment by Adam Rivera on October 11, 2011 at 11:46am

Ben: No problem, and thanks for the well wishes! I just downloaded a video that was posted here recently, and I am going to put together a more realistic usage video. I'll post again when I have that finished. Good luck with your FPV setup.

Comment by Yusuf Pirgali on October 11, 2011 at 11:55am

Adam, I think this is great, as I have been looking for someone to either use the OpenTLD software or their own. I would also like to track a flying device and use it to point the filming camera at the object, for self-filming and antenna following. So if you have not seen OpenTLD, go here: https://github.com/zk00006/OpenTLD/wiki

Comment by Ben Schwehn on October 11, 2011 at 12:00pm

The problem with that video (and most posted FPV videos) is that (I think) it does not contain the actual video source as received in real time on the ground, but instead the on-board recording, which is not only higher definition but also does not include any of the glitches associated with the radio video transmission you'd have to work with in real time...

 

Then again, looking at some of trappy's footage (e.g. http://www.youtube.com/watch?v=6_kmUPmFXCw), there aren't necessarily as many glitches as I originally expected. You can tell it's lower resolution and recorded via a lower-quality lens/camera, but the noise isn't too bad at all. So perhaps (and I sure hope so) this is not a big issue at all.

 
