Precision Land ArduCopter Demo

TURN UP THE VOLUME!!! THE AUDIO IS QUIET!!!!

For the last few months I've been working on vision-assisted landing for ArduCopter. The goal is to provide a way to land a multirotor precisely, which is not currently attainable with the GPS-based landing supported by ArduCopter.

The development of this software was stimulated by Japan's recent effort to increase the use of UAVs for search and rescue. More can be read about that here. This sub-project of the S&R effort is being funded by Japan Drones, a 3DR retailer, and Enroute, also a 3DR retailer and a member of the DroneCode Foundation.

This specific feature, precision land, is a very small part of the larger project and is designed for multirotor recovery. The idea is to fly a multirotor to a disaster zone, survey the land, and relay intel (such as pictures) back to a base station. The base station may be a couple of miles away from the disaster location, so precious flight time, and ultimately battery, is used flying the copter to and from the disaster site. Multirotors are not known for lengthy flight times, so conserving battery for surveying rather than traveling is critical to a successful mission.

That's where precision land comes in. The idea is to station rovers, or unmanned ground vehicles, near the disaster location. These rovers will have a landing pad on top for a multirotor. That way a multirotor can use all of its battery to survey an area, land on top of a rover, and hitch a ride back to the base station on the rover.

The specifics:

Autopilot: Pixhawk with ArduCopter 3.2

Companion Computer: Odroid U3

Camera: Logitech c920

Vision algorithm: OpenCV Canny edge detection, OpenCV ellipse detection, and my concentric-circle algorithm (very simple; a rough sketch follows these specs)

Performance (on four cores): processes images at 30+ fps in good light and 10 fps in low light. Performance is limited by camera exposure, not the Odroid's processing power!
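For anyone curious about the detection step, here is a rough Python/OpenCV sketch of that kind of concentric-circle check. This is not the code from my repository; the thresholds, tolerances, and function names are illustrative only.

```python
# Rough sketch of concentric-circle target detection with OpenCV.
# All thresholds/tolerances below are illustrative, not tuned values.
import cv2
import numpy as np

def find_target_center(gray):
    edges = cv2.Canny(gray, 100, 200)               # edge map of the frame
    # [-2:] keeps this working across OpenCV 2/3/4 return signatures
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_NONE)[-2:]

    centers = []
    for c in contours:
        if len(c) < 5:                              # fitEllipse needs >= 5 points
            continue
        (cx, cy), (w, h), _ = cv2.fitEllipse(c)     # fit an ellipse to each edge contour
        if min(w, h) < 10:                          # drop tiny noise contours
            continue
        centers.append((cx, cy))

    if len(centers) < 2:
        return None

    # "Concentric" test: several ellipse centers clustered around one point
    pts = np.array(centers)
    mean = pts.mean(axis=0)
    close = pts[np.linalg.norm(pts - mean, axis=1) < 15.0]
    return tuple(close.mean(axis=0)) if len(close) >= 2 else None

if __name__ == "__main__":
    frame = cv2.imread("target.jpg", cv2.IMREAD_GRAYSCALE)
    print(find_target_center(frame))
```

The idea is simply that the rings of the printed target produce several fitted ellipses whose centers all land on roughly the same pixel, which is what makes the check cheap and robust to partial occlusion.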

The future:

1. I hope to have my first live test in a week or so. More testing needs to be done in the simulator to check all the edge cases and make the landing logic more robust.

2. Integrate the code more closely with the ArduCopter code. Currently the companion computer takes control of the aircraft when it is in GUIDED mode (a rough sketch of this kind of offboard control follows this list). The hope is to have the companion computer take control in landing modes (RTL).

3. Check the performance on other companion computers: Intel Edison, BeagleBone Black, Raspberry Pi (maybe).
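As a rough illustration of item 2, here is a minimal pymavlink sketch of a companion computer steering the copter with velocity setpoints while it sits in GUIDED mode. The connection string, coordinate frame, type_mask, and example values are assumptions for illustration, not what my repository actually does.

```python
# Minimal sketch: companion computer steering the copter in GUIDED mode
# by streaming velocity setpoints over MAVLink (pymavlink).
# Connection string, frame choice, and example values are assumptions.
from pymavlink import mavutil

master = mavutil.mavlink_connection('/dev/ttyUSB0', baud=57600)
master.wait_heartbeat()                             # wait until the Pixhawk is talking

def send_velocity(vx, vy, vz):
    """Send a body-frame velocity setpoint in m/s."""
    master.mav.set_position_target_local_ned_send(
        0,                                          # time_boot_ms (not used)
        master.target_system,
        master.target_component,
        mavutil.mavlink.MAV_FRAME_BODY_OFFSET_NED,  # velocities relative to the vehicle
        0b0000111111000111,                         # type_mask: only velocity fields enabled
        0, 0, 0,                                    # x, y, z position (ignored)
        vx, vy, vz,                                 # velocity in m/s
        0, 0, 0,                                    # accelerations (ignored)
        0, 0)                                       # yaw, yaw rate (ignored)

# Example: nudge toward a target seen slightly ahead-right while descending
send_velocity(0.5, 0.2, 0.3)                        # NED frame: +z is down, so 0.3 descends
```

In practice the setpoint would be recomputed from the camera's target offset and re-sent several times per second until touchdown.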

The code:

The code can be found on my GitHub. Be cautious!

Thanks to Randy Mackay for helping me integrate the code with ArduCopter. 

Daniel Nugent

Comments

  • Really nice work! I did something similar for my master's thesis: http://diydrones.com/profiles/blogs/quadcopter-performing-visual-gu.... I used the BeagleBoard-xM as the high-level onboard computer and found it a little slow for the task. I also tried the Raspberry Pi, but it performed worse. The Odroid U3 looks promising. Have you looked at alternatives to cameras?

  • Thanks for the good tech as always, John.

  • Nice work. It would be cool if you recharged on the ground rover.

  • @David,

    Environmental conditions are a real concern. I'd love to hear more about your offline tests.

  • Congrats, Daniel, great job!

    I've been working on a very similar vision system for the last few months... but of course my implementation is not as close to being integrated with ArduCopter as yours! I'm using the same landmark concept (a set of concentric rings) and probably the same (or a very similar) detection algorithm.

    I currently have a fully functional optical sensor working on Raspberry Pi (with its Raspi camera) which also runs on Gumstix Overo and a Logitech webcam. I will port it to Beaglebone Black, and also to Odroid when I have one at hand. It is now working in "open loop" mode, meaning it can be controlled from the autopilot, but its information is not used for navigation yet: I'm running test flights for building a dataset of captured landmark images in different real conditions.

    My goal is optimizing the landmark recognition algorithm to cope with real-life conditions (you mention one in your post: low light; but also sun reflections, shadows, etc.). In order to attain this, I'm currently running offline tests on the captured dataset using different detection algorithms. The rationale behind this is that vision detection works fine in lab conditions, but is quite sensitive to illumination, motion blur and other factors: the recognition algorithm must be resilient to these real conditions.

    I'd love to share the details and collaborate if you are interested. I would also love to know more details on several integration aspects (for example, the communication protocol between the companion computer and ArduCopter: I have a working protocol defined, but you surely have very interesting information on this).

  • @ Julien

    The current implementation is meant to be simple and not strictly confined to landing on a rover. The hope is to make a system which can be implemented in numerous scenarios and reach a wide user base. A printed target, camera, and companion computer are fairly minimal. LEDs would be beneficial, but if they can be avoided, that would be ideal.

  • Another idea:

    Put that system on the rover and identify the copter with LED strips under its arms. Then send a MAVLink command to the copter to reposition it. Normally one command should be enough, but you can send one at the beginning of the descent and another at the final stage just in case. There are some advantages to this arrangement:

    - limit the weight of the copter (no camera or Odroid on board)

    - no extra power consumption on the copter (for the Odroid; the camera may have its own battery)

    - increase the copter's autonomy

    - pool the positioning system for several copters, as I guess there will be more copters than rovers (so, a cheaper solution)

    - the rover will manage the different landings and take-offs and will know whether its platform is busy or not.

  • Congratulations on this outstanding progress! Let's hope we will see this incorporated in the main ArduCopter firmware very soon...

  • Great job!

    What about using a bright target (with LEDs) to improve the low-light performance? This way, you could maybe use a simpler shape.

  • Optical flow systems used for motion detection usually rely on very sensitive, low-resolution sensors with high frame rates. At normal video rates (30 fps), high-speed movements may cover a lot of pixels and introduce motion blur, and searching for movement over larger areas will drastically increase the workload and the risk of degraded performance.
