
Cameras have become an integral part of autonomous drones, helping them navigate as well as providing application-specific insights such as locating an object of interest. Detecting a desired object in the camera view allows the drone to make decisions like following the object or orbiting around it. Such capabilities are useful in several applications, from photo/video shoots to surveys and search-and-rescue missions.

The previous article (Part 1) gave a glimpse of the onboard object tracking module in FlytOS; this article delves into its details. The module uses relatively simple OpenCV-based algorithms to detect and track an object in the field of view, relying on attributes like color and shape for detection and a Kalman Filter for tracking. It also supports the ROS-based OpenTLD library, which needs to be installed separately. Along with tracking, there are APIs to make the drone follow the tracked object; the follower uses a PD controller and currently assumes a downward-looking camera.
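For a concrete sense of the color-plus-Kalman approach, here is a minimal, illustrative sketch (not the FlytOS implementation): each frame is thresholded in HSV, the largest blob's centroid is used as the measurement, and an OpenCV Kalman filter smooths it and bridges missed detections. The HSV range and noise covariances are placeholder values, and the contour unpacking assumes OpenCV 4.

```python
import cv2
import numpy as np

# Placeholder HSV range (roughly red); tune for the actual object
lower_hsv = np.array([0, 120, 70])
upper_hsv = np.array([10, 255, 255])

# Constant-velocity Kalman filter: state (x, y, vx, vy), measurement (x, y)
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

def track(frame):
    """Return the filtered (x, y) centroid of the largest colored blob."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)
    prediction = kf.predict()
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4 return format
    if contours:
        largest = max(contours, key=cv2.contourArea)
        m = cv2.moments(largest)
        if m["m00"] > 0:
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
            kf.correct(np.array([[cx], [cy]], np.float32))
            return cx, cy
    # No detection in this frame: fall back to the Kalman prediction
    return float(prediction[0, 0]), float(prediction[1, 0])
```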


FlytVision is a sample onboard web app that provides an interface for ground devices to access onboard video streams; it has now been extended to include object tracking and following. It also demonstrates how seamlessly onboard image processing fits into the overall framework and allows for data plumbing with ground devices.

The first step is to stream the processed images from the object tracking module, then select the Detect/Track mode in the app. The currently available modes are:

  1. Color: Uses HSV color thresholds, heuristics such as change in the object's distance and area, and a Kalman Filter for tracking

  2. Circle: Uses HSV color, Hough circle detection and a Kalman Filter for tracking (sketched after this list)

  3. TLD: Uses the ros_open_tld_3d library, modified for integration with the object tracking module
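As a rough illustration of the Circle mode (again, not the actual FlytOS code), the sketch below masks the frame by HSV color and then runs OpenCV's Hough circle transform on the mask; all thresholds are placeholder values that would need tuning.

```python
import cv2
import numpy as np

def detect_circle(frame, lower_hsv, upper_hsv):
    """Return (x, y, r) of the strongest circle in the color-masked frame, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)
    mask = cv2.GaussianBlur(mask, (9, 9), 2)  # smooth the mask before the Hough transform
    circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                               param1=100, param2=30, minRadius=5, maxRadius=0)
    if circles is None:
        return None
    x, y, r = circles[0][0]  # strongest candidate; (x, y) can feed the Kalman filter
    return float(x), float(y), float(r)
```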

The object of interest can be selected on the video stream itself. Depending on the selected mode, the corresponding attributes are detected and tracked in subsequent frames. To follow the object, its offset from the image center is projected onto the ground and position setpoints are generated with a PD controller. The overall workflow is: select the object in the stream, detect and track it in every frame, project its offset to the ground, and command the drone with the resulting setpoints.
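The snippet below is a simplified, hypothetical version of that follow step: the pixel offset from the image center is projected to a ground displacement using the altitude and a pinhole model (downward-looking camera assumed), and a PD controller turns it into a position setpoint. The gains, focal length and control law are placeholders, not the FlytOS values.

```python
def follow_setpoint(cx, cy, img_w, img_h, altitude, focal_px,
                    prev_err=(0.0, 0.0), dt=1.0 / 25.0,
                    kp=0.8, kd=0.2):
    """Map the tracked object's pixel offset to a local position setpoint (PD)."""
    # Pixel offset from the image center
    ex_px = cx - img_w / 2.0
    ey_px = cy - img_h / 2.0
    # Pinhole projection onto the ground plane (downward-looking camera)
    ex = ex_px * altitude / focal_px
    ey = ey_px * altitude / focal_px
    # Proportional plus derivative terms
    dx = kp * ex + kd * (ex - prev_err[0]) / dt
    dy = kp * ey + kd * (ey - prev_err[1]) / dt
    return (dx, dy), (ex, ey)  # setpoint offset, error to carry into the next call
```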

The onboard modules and web app are first tested in simulation using FlytSim. While the parameters may need to be retuned for a real drone, simulation helps validate the algorithm and overall functionality before the drone even takes off.


Several params have been exposed from the onboard object tracking module so that they can be tuned from the ground app for a given setup. These include HSV color ranges, Hough circle params, TLD params, controller gains, and options to turn attitude compensation, tracking and follow modes on and off.
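As an illustration only, a ground-side script could tune such a param over HTTP along these lines. The endpoint path, parameter name and payload below are placeholders, not the documented FlytOS API; consult docs.flytbase.com for the actual param APIs.

```python
import requests

FLYTOS_IP = "192.168.1.10"                                  # placeholder drone IP
url = "http://{}/some/param/endpoint".format(FLYTOS_IP)     # placeholder path, not the real API

payload = {
    "param_name": "hsv_hue_min",   # illustrative parameter name
    "param_value": "20",
}
resp = requests.post(url, json=payload, timeout=2.0)
print(resp.status_code, resp.text)
```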

 

Besides params, custom data sharing is required to indicate the region of interest selected by the user in the video stream. This is achieved by publishing a new topic from the app and subscribing to it in the onboard object tracking module. Whenever the user selects a region by drawing a rectangle on the video stream, the corresponding coordinates are published on this topic.
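On the onboard side, the subscriber could look roughly like the following sketch. The topic name is an assumption, not the actual FlytOS topic, while sensor_msgs/RegionOfInterest is a standard ROS message for rectangle coordinates.

```python
#!/usr/bin/env python
# Minimal sketch: subscribe to the rectangle drawn in the web app.
import rospy
from sensor_msgs.msg import RegionOfInterest

def roi_callback(msg):
    rospy.loginfo("New ROI: x=%d y=%d w=%d h=%d",
                  msg.x_offset, msg.y_offset, msg.width, msg.height)
    # ...re-initialize the detector/tracker on this region...

if __name__ == "__main__":
    rospy.init_node("roi_listener")
    rospy.Subscriber("/object_tracking/roi", RegionOfInterest, roi_callback)  # assumed topic name
    rospy.spin()
```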

 

The Inspect section in the app shows the object's centroid position, the drone's position and the setpoints being sent. These data streams are obtained by subscribing to them through the FlytOS websocket APIs. The object tracking features can be accessed from your own custom app using the object tracking FlytAPIs.
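As a rough, non-authoritative sketch of such a subscription, a rosbridge-style subscribe from Python might look like this. The websocket URL and topic name are assumptions; the actual endpoints and topics are listed in the FlytOS documentation.

```python
import json
import websocket   # pip install websocket-client

WS_URL = "ws://192.168.1.10/websocket"   # placeholder address

def on_open(ws):
    # rosbridge-protocol subscribe request
    ws.send(json.dumps({
        "op": "subscribe",
        "topic": "/flytpod/mavros/local_position/local",   # assumed topic name
    }))

def on_message(ws, message):
    print("position update:", json.loads(message))

ws = websocket.WebSocketApp(WS_URL, on_open=on_open, on_message=on_message)
ws.run_forever()
```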

In the demo video above we used an SJCam-4000 camera plugged into a FlytPOD and flew it on a hex-550 frame. The images are captured at 30 fps at 320 x 240 resolution. The onboard object tracking module ran in color mode at ~25 fps with roughly 75% utilization of a CPU core. The camera is rigidly attached to the frame without a gimbal, so onboard attitude compensation is required for setpoint correction; the onboard video feed is still a bit shaky, and smoother operation is possible with a gimbal.
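Conceptually, that attitude compensation amounts to removing the ground-projection shift caused by the drone's tilt before computing setpoints. The small-angle sketch below is an approximation for illustration, not the FlytOS implementation.

```python
import math

def attitude_compensate(ex, ey, roll, pitch, altitude):
    """Remove the ground-projection shift caused by drone tilt (illustrative model).

    ex, ey      : object offset from image center, already projected to metres
    roll, pitch : attitude in radians
    altitude    : height above ground in metres
    """
    # A tilted, rigidly mounted downward camera sees the ground displaced by
    # roughly altitude * tan(angle) along each axis; subtract that bias.
    ex_corrected = ex - altitude * math.tan(pitch)
    ey_corrected = ey - altitude * math.tan(roll)
    return ex_corrected, ey_corrected
```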

 

We are exploring more capabilities to add, such as April tags recognition, gimbal control to keep the object at the image center combined with follow mode, and the possibility of accelerated vision processing.

 

We are currently running the FlytPOD Beta program. Sign up here - http://flytbase.com/flytpod/#beta

To learn more about the Flyt platform, please visit - www.flytbase.com

Documentation for developers - http://docs.flytbase.com


Comments

  • Hi Francisco and Tiziano,

Thanks a lot for your appreciation and apologies for completely missing to reply earlier. Open source can get tricky and we have internally raised and discussed this matter several times. Our current plan is to focus on giving enough power to developers through APIs and refining core layers with their feedback. We will do our best to see to it that developers' hands are not tied, and we are willing to take up suggestions/requests on a case-by-case basis. This is still a nascent phase and the strategy will evolve as things start to mature.

    Hope this helps.

    cheers

    Pradeep 

  • Hi Patrick,

    Thanks for writing. Yes, we have successfully tested April tags recognition. Here is an initial demo - https://youtu.be/eSoAqfV91T0

    There is an onboard ROS package for April tags and also a sample web app that we used to test it. We have not yet formalized the APIs for April tags, within FlytOS. Thus, the package needs to be run separately and the sample web app shows how to access the video stream and how to set a param for selecting the tags family (video and param APIs use FlytOS framework). If this interests you, I can make the onboard package and sample app available.

    cheers 

  • Hello Pradeep,

'We are exploring more capabilities to add like April tags recognition',

Have you made any progress on this?

  • Really like it. I agree with Francisco: open source

  • You guys are doing great work. But you should consider open sourcing the FlytOS, it would allow more people to experiment with (and contribute to) the low level stuff instead of just using the high-level APIs.
