Hello everyone! It’s been a while since I wrote a blog post here!

In this post I will describe a very cool robotics project I did last year. It is as cool as it sounds: making a drone follow a Roomba using computer vision!

This has been done before, and surely someone has posted their own solution here; I will just describe my version.

The project can be broken down into several sections: the vehicle, the communications, the control, the computer vision, and the results.

The objective of this project is to make a drone “see” a Roomba and follow it from the air.

On the vehicle side, I was using the BigX, a big vehicle with very nice performance. Here is a pic of it:

On board I was using the same concept as the Flight Stack: the combination of a flight controller (a Pixhack in this case) and a Raspberry Pi that adds extra features.

The RPi was a model 3, and it’s the one in charge of “piloting” the aircraft when I’m not doing it. The RPi also runs the computer vision algorithm that lets the drone “see” the Roomba. With that information (the target position in pixels, X and Y), the RPi computes the velocity commands needed to steer the vehicle toward the center of the target.
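The pixel-to-velocity step can be sketched as a simple proportional controller. This is just an illustrative sketch, not my actual controller: the frame size, gain, and axis mapping below are assumptions and would need tuning for a real camera mount.

```python
def velocity_from_pixels(cx, cy, frame_w=640, frame_h=480, kp=0.005, v_max=1.0):
    """Convert a target's pixel centroid into velocity commands.

    cx, cy: target centroid in pixels; the goal is to drive the
    target toward the image center.
    kp: proportional gain in (m/s) per pixel of error -- illustrative.
    v_max: clamp so the vehicle never gets an unsafe speed command.
    """
    err_x = cx - frame_w / 2.0   # positive: target right of center
    err_y = cy - frame_h / 2.0   # positive: target below center (image y grows down)

    # Proportional control, clamped to +/- v_max.
    # Axis mapping (image -> vehicle) depends on camera mounting; this
    # assumes a downward-facing camera aligned with the body frame.
    vx = max(-v_max, min(v_max, -kp * err_y))  # forward/back command
    vy = max(-v_max, min(v_max, kp * err_x))   # right/left command
    return vx, vy
```

With the target centered, both commands are zero; a large pixel error saturates at `v_max` instead of growing without bound.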

The RPi was also in charge of the communications: it created a network, and on the ground I used a Ubiquiti AP connected via Ethernet to my MacBook Pro. I used this configuration because that AP is rated for 10 km of line-of-sight range (I only tried it up to 3 km…).

Also on board, connected to the RPi, a B101 HDMI bridge captured the frames from the GoPro camera so they could be analyzed.

On the ground side, as I mentioned before, I had the Ubiquiti AP with my laptop connected to it via Ethernet. My computer logged in to the RPi via SSH to activate the scripts that run the main stuff. I also had QGroundControl open to see the vehicle’s telemetry in a nice way, using MAVProxy with UDP casts. This is how my computer screen looked:

In the image above you can see the position teleoperation tool from the AltaX ground station program. This module changes the position of the robot by reading the keyboard on the ground station computer. Pretty neat…

For the computer vision part, I added blue tape to the top of the Roomba so it would be easily distinguishable from the environment. I also tuned my different color tracker algorithms as much as possible; you can find the code here and a demo video here.

When you combine all the ingredients, plus a velocity-vector position controller, you get a nice result, like the one shown in the video at the beginning.

Let me know what you think! In this case I cannot release the code, as it’s protected by my employer, but as usual I can guide or make recommendations to anyone doing something similar.

Don't forget to fly safe!

Comment by Chris Anderson on December 26, 2017 at 3:21pm

So impressive! What code were you running on the Pixhawk? 

Comment by Aldo Vargas on December 26, 2017 at 3:35pm

Hi Chris, thanks a lot, I was running APM, cheers!

Comment by Patrick Poirier on December 26, 2017 at 5:13pm

Hello Aldo, good to see you are still doing great demos !!

Are you using precision loiter and controlling the quad with the MAVLink LANDING_TARGET message?

Regards

Comment by Aldo Vargas on December 26, 2017 at 5:39pm

Hi Patrick, I'm calculating the necessary velocities with a controller and then sending them to the Pixhawk in the local NED frame. I did it this way because it's faster; my end goal was not to follow a slow Roomba, but to track something faster... :D

Cheers Pat!

Comment by Patrick Poirier on December 26, 2017 at 5:42pm

Ohhh, that is interesting. If it's possible to show part of this work, I'd be delighted ;-)

Cheers

Comment by earthpatrol on December 27, 2017 at 3:46pm

For those that want to develop their own version of this, OpenCV is your friend. Literally an afternoon of coding, if you have all the hardware. No need for deep learning either if you're simply using uniquely shaped/colored objects to detect a target.

Comment by Aldo Vargas on December 27, 2017 at 3:50pm

Yes, I'm using OpenCV, and that code is open source; you can see it here: https://github.com/alduxvm/rpi-opencv

Cheers!

Comment by Glenn Hollowell on December 27, 2017 at 10:55pm

Ooooh, nice! Thanks for sharing this. I'm impressed w/the code as well as the Big X platform. It is a nice big technical drone like I wish we'd see more of in the market. Best regards and happy holidays to you and your team!

Comment by Tiziano Fiorenzani on January 2, 2018 at 8:59am

That is a really great job!


© 2018   Created by Chris Anderson.