TURN UP THE VOLUME!!! THE AUDIO IS QUIET!!!!

For the last few months I've been working on vision-assisted landing for ArduCopter. The hope is to provide a means of landing a multirotor in a precise manner, something not currently attainable with the GPS-based landing ArduCopter supports.

The development of this software was stimulated by Japan's recent effort to increase the use of UAVs for search and rescue. More can be read about that here!!! This sub-project of the S&R effort is being funded by Japan Drones, a 3DR retailer, and Enroute, also a 3DR retailer and a member of the DroneCode Foundation.

This specific feature, precision land, is a very small part of the larger project and is designed for multirotor recovery. The idea is to fly a multirotor to a disaster zone, survey the land, and relay intel (such as pictures) back to a base station. The base station may be a couple of miles away from the disaster location, so precious flight time, and ultimately battery, is used flying the copter to and from the disaster location. Multirotors are not known for their lengthy flight times, so conserving battery for surveying rather than traveling is critical to a successful mission.

That's where the precision land comes in. The idea is to station rovers, or unmanned ground vehicles, near the disaster location. These rovers will have a landing pad on top for a multirotor. That way a multirotor can use all of its battery to survey an area, land on top of a rover, and hitch a ride back to the base station.

The specifics:

Autopilot: Pixhawk with ArduCopter 3.2

Companion Computer: Odroid U3

Camera: Logitech c920

Vision algorithm: OpenCV Canny edge detection, OpenCV ellipse detector, and my concentric circle algorithm (really simple; a sketch follows this list)

Performance (on four cores): processes images at 30+ fps in good light and 10 fps in low light. Performance is limited by camera exposure, not the Odroid's processing power!
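To make the concentric circle idea concrete, here is a minimal sketch of that kind of pipeline in Python with OpenCV. This is not the project's actual code: the function name, Canny thresholds, minimum ring size, and center-clustering tolerance are all illustrative assumptions.

```python
import cv2
import numpy as np

def find_concentric_circles(frame, min_rings=3, center_tol=10.0):
    """Return (x, y) if several fitted-ellipse centers coincide.

    Pipeline: Canny edges -> contours -> ellipse fits -> check that
    at least min_rings centers agree to within center_tol pixels.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(gray, 100, 200)

    # [-2] keeps this working across OpenCV 2/3/4 return styles.
    contours = cv2.findContours(edges, cv2.RETR_LIST,
                                cv2.CHAIN_APPROX_NONE)[-2]
    centers = []
    for c in contours:
        if len(c) < 5:            # fitEllipse needs >= 5 points
            continue
        (cx, cy), (w, h), _angle = cv2.fitEllipse(c)
        if min(w, h) < 10:        # skip tiny edge fragments
            continue
        centers.append((cx, cy))

    # A real target yields one ring per circle, all sharing a center.
    for cx, cy in centers:
        near = [p for p in centers
                if np.hypot(p[0] - cx, p[1] - cy) < center_tol]
        if len(near) >= min_rings:
            mx, my = np.mean(near, axis=0)
            return float(mx), float(my)
    return None
```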

The future:

1. I hope to have my first live test in a week or so. More testing needs to be done in the simulator to check all the edge cases and make the landing logic more robust.

2. Integrate the code more closely with the ArduCopter code. Currently the companion computer takes control of the aircraft when it is in GUIDED mode. The hope is to have the companion computer take control in landing modes (RTL). A rough sketch of the GUIDED-mode control approach follows this list.

3. Check the performance on other companion computers: Intel Edison, BeagleBone Black, Raspberry Pi (maybe).
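As a rough illustration of item 2 above, this is the general shape of GUIDED-mode control from a companion computer: stream velocity setpoints over MAVLink. A generic pymavlink sketch, not the project's code; the connection string and velocity values are placeholder assumptions.

```python
from pymavlink import mavutil

# Placeholder connection string; adjust for your telemetry link.
master = mavutil.mavlink_connection('udpin:0.0.0.0:14550')
master.wait_heartbeat()

def send_velocity(vx, vy, vz):
    """Send one body-frame velocity setpoint (m/s, NED: +z is down).

    Only meaningful while the copter is in GUIDED mode, and these
    should be streamed continuously (a few Hz), not sent once.
    """
    master.mav.set_position_target_local_ned_send(
        0,                          # time_boot_ms (not used)
        master.target_system,
        master.target_component,
        mavutil.mavlink.MAV_FRAME_BODY_OFFSET_NED,
        0b0000111111000111,         # type_mask: use velocity only
        0, 0, 0,                    # x, y, z position (ignored)
        vx, vy, vz,                 # velocity components
        0, 0, 0,                    # acceleration (ignored)
        0, 0)                       # yaw, yaw_rate (ignored)

# e.g. drift toward the detected target while descending at 0.3 m/s
send_velocity(0.5, 0.0, 0.3)
```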

The code:

The code can be found on my GitHub. Be cautious!

Thanks to Randy Mackay for helping me integrate the code with ArduCopter. 

Daniel Nugent



Developer
Comment by Andrew Tridgell on January 9, 2015 at 8:31pm

Great work Daniel! I'm absolutely delighted to see SITL and DroneAPI being used for this sort of work. Hopefully we'll see a lot more things like this in future.


3D Robotics
Comment by Chris Anderson on January 9, 2015 at 9:38pm

Love this. Bravo!

Comment by Rob_Lefebvre on January 10, 2015 at 7:28am

Very cool!

Is it possible to use a single camera, such as this one, to do optical flow and integrate it into the EKF (which Paul is working on), and then also use it for precision landing? I'm just wondering if we can get both features out of a single camera system, or if we'd need two downward-facing cameras.


Moderator
Comment by Vladimir "Lazy" Khudyakov on January 10, 2015 at 9:06am

Fantastic!


Moderator
Comment by Vladimir "Lazy" Khudyakov on January 10, 2015 at 9:08am

@Rob

When you use a PAPI, one camera is enough.

Comment by Daniel Nugent on January 10, 2015 at 10:46am

@Tridge, SITL is a great tool. Definitely a copter saver!

@Rob, I am only familiar with OpenCV algorithms, which are less complex than what Paul is doing. OpenCV does support optical flow, but I have never used it and don't know what it is capable of. I don't think the PX4Flow is powerful enough to run the precision land vision algorithm, so the optical flow code would have to be ported to work on a companion computer and webcam.

Way in the future I could see the camera mounted on a gimbal and capable of running multiple algorithms throughout a mission: precision land, visual follow me, optical flow, SLAM, Randy's Red Balloon Finder, Tridge's OBC Joe finder, etc.
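For the curious, OpenCV's dense optical flow is only a few calls. A minimal sketch (the camera index and Farneback parameters here are assumptions, not tuned settings):

```python
import cv2

cap = cv2.VideoCapture(0)          # camera index 0 is an assumption
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Farneback dense flow: one (dx, dy) vector per pixel.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    dx = flow[..., 0].mean()
    dy = flow[..., 1].mean()
    print("mean image motion: %.2f, %.2f px/frame" % (dx, dy))
    prev_gray = gray
```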

Comment by Jesus A on January 10, 2015 at 10:54am
Great work!

@rob I would love to use just one camera for every case.
Maybe the information output by optflow could be generated by a custom library using OpenCV and a camera such as the c920.

Developer
Comment by John Arne Birkeland on January 10, 2015 at 12:14pm

Optical flow sensors used for motion detection usually combine very sensitive, low-resolution imagers with high frame rates. At normal video rates (30 fps), high-speed movement may cover a lot of pixels and introduce motion blur. And searching for movement over larger areas will drastically increase the workload and the risk of degraded performance.
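To put rough numbers on that point, here is a back-of-the-envelope sketch; every value in it is an illustrative assumption, not a measurement:

```python
import math

# All numbers below are illustrative assumptions.
fov_deg    = 78.0    # assumed camera field of view
width_px   = 640     # processed frame width
altitude_m = 10.0
speed_ms   = 5.0     # horizontal speed over ground
fps        = 30.0

ground_width_m = 2 * altitude_m * math.tan(math.radians(fov_deg / 2))
px_per_m = width_px / ground_width_m
shift_px = (speed_ms / fps) * px_per_m
print("%.1f px of apparent motion per frame" % shift_px)
# ~6.6 px/frame here; lower altitude or faster flight grows it quickly.
```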

Comment by Julien Dubois on January 10, 2015 at 1:05pm

Great job!

What about using a bright target (with LEDs) to improve the low-light performance? That way, you could maybe use a simpler shape.


MR60
Comment by Hugues on January 10, 2015 at 1:17pm

Congratulations on this outstanding progress! Let's hope we see this incorporated into the main ArduCopter firmware very soon.
