image processing for precision landing

Hi all,

I’m looking for a straightforward solution to have a quad land on a small outdoor target area of 1ft x 1ft. Fully autonomous. Acceptable accuracy would be +/- 2 inches of the target center.

We know the GPS location of the landing site, which the quad will use as a waypoint to get to the area and start hovering at altitude. Then for the actual precision landing and descent, a visual marker could be used; ideally I wouldn't need anything electronic on the ground, to keep things simple (unless it's really cheap, as in less than $50, and leads to a much simpler solution). Also, all computing systems must be onboard the quad.

I was thinking of combining a Pixhawk, a Raspberry Pi 3 and its V2 camera module (8MP) to do computer vision with OpenCV. I would like to keep things simple and, if possible, limit the image recognition to basically a color mask + find contours. First the Pixhawk will take the quad to the GPS location. Then in “locate & descend” mode the RPi3 would start scanning and feed the Pixhawk (x, y, z) velocity vectors to close in on the target and land.
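
In case it helps frame the question, here is roughly the kind of OpenCV code I have in mind (just a sketch; the HSV range and the mapping from pixel error to velocity are placeholders I would still have to tune):

import cv2
import numpy as np

# Placeholder HSV range for a saturated marker color; would need tuning outdoors.
LOWER = np.array([100, 150, 80], dtype=np.uint8)
UPPER = np.array([130, 255, 255], dtype=np.uint8)

def find_target_offset(frame_bgr):
    # Return the marker center offset from the image center in pixels, or None.
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]  # [-2] works across OpenCV versions
    if not contours:
        return None
    biggest = max(contours, key=cv2.contourArea)  # assume the largest blob is the marker
    m = cv2.moments(biggest)
    if m["m00"] == 0:
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    h, w = frame_bgr.shape[:2]
    return cx - w / 2.0, cy - h / 2.0  # pixel error from the image center

# The pixel error would then be scaled (using altitude and camera field of view)
# into small x/y velocity corrections sent to the Pixhawk while descending.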

Will this be good enough? Any potential roadblocks that I should anticipate?

While searching on the forum I found a UC Berkeley project [1] that seems related although it’s 2 years old. I also came across the PX4Flow work but I’m hoping I can do without.

Thanks!

[1] http://diydrones.com/profiles/blog/show?id=705844%3ABlogPost%3A1789944

Replies

  • Edit: Sorry, for some reason the previous posts didn't load in my browser, so you've already been told about most of these.

    Hi, there's been some work in the past:

    http://diydrones.com/profiles/blog/show?id=705844%3ABlogPost%3A1877...

    http://diydrones.com/profiles/blogs/precision-land-arducopter

    https://github.com/djnugent/SmartCamera

    https://github.com/squilter/target-land

    There's some more detailed work on precision landing being done by the devs at the moment, but I'm not sure what that entails.

    You could also look at a hardware solution:

    http://irlock.com/collections/precision-landing

  • Thanks Thomas for adding some more.  Nice videos with the IR technology :-)

    I took a look at the ZED+TX1 video.  Any idea why the vehicle doesn't hit the center of the target?  Is the software configured to just be happy with putting all 4 legs down on the support structure?  I was hoping for centimeter accuracy with such hardware.  Also unclear why >4m would be an issue; the target seems pretty large.

    Software/algorithm-wise, are there any recommendations?  I've played a bit with OpenCV and a few different marker shapes, and the detection seems to work fine in a relatively controlled environment.  I have yet to test with my flight controller and see how the vehicle behaves.  Or maybe I'll try with a simulator first; please let me know if there's a good way to do this.

  • Sorry I am late to this interesting conversation. :)  

    The reason we (IR-LOCK) have stuck with the beacon system is that we can get very reliable target detection with this technique, at distances of 0 to 15 meters. Here are some detection demos in adverse lighting conditions (link). And here is the full collection of flight test videos in a wide variety of lighting conditions (link).

    Also, Randy made a very nice precision landing demo video using the ZED stereo camera and TX1 for image processing (link). However, his video description suggests that the detection is not reliable at heights above 4 meters. Still, I am impressed by the performance with a passive target; it is a challenging problem.

    Feel free to send me any questions at thomas@irlock.com or here on the forum. 

  • @Dan J Pollock

    >> I was looking at the Pixhawk II specs the other night. Triple redundant and centimeter accuracy on the gps side. 

    Where did you read this?  As far as I know, GPS doesn't get that good on its own; only with RTK, and then the cost is in the hundreds of dollars.

  • I was looking at the Pixhawk II specs the other night: triple redundant and centimeter accuracy on the GPS side. With something like this I don't think it would be that difficult. Once it found the target with the vision side and locked onto those coordinates, it's just an auto landing. Depth cams won't do you much good; they aren't effective close up, so you would have to use something else for the distance between their closest accurate point and the ground: sonar or lidar perhaps, or maybe RealSense. Lots more complexity and stuff to go wrong. The IR beacons seem to be the way to go now, from what I've read.

  • You might want to check out FlytPod; there's a bunch of discussions on this site about it, and this one is quite recent and relevant.

    I have not played at all with this, but my guess is that, as long as your GPS gets you close enough (which any GPS by and large does) and your target is extremely different from its surroundings, you should be able to make reasonable vision-based guesstimates.

  • Hello,

    Using marker tags could be a good start. You can refer to this section of the OpenCV docs on using markers for pose estimation (see the sketch at the end of this reply): http://docs.opencv.org/3.1.0/d5/dae/tutorial_aruco_detection.html

    There is a lot of experimentation and code available for the Parrot AR.Drone stack, and I have seen some functional code for landing on a marker.

    Here is a good paper explaining the theory:  Quadcopter Automatic Landing on a Docking Station

    And if you are interested, you can adapt Randy's Balloon Popper code.

    Have fun.
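
    For what it's worth, the ArUco part of that tutorial boils down to something like the sketch below (the camera matrix, distortion coefficients and marker size are placeholders; you would use your own calibration values):

        import numpy as np
        import cv2
        import cv2.aruco as aruco   # needs the opencv-contrib modules

        # Placeholder calibration; replace with your own camera calibration results.
        camera_matrix = np.array([[600.0, 0.0, 320.0],
                                  [0.0, 600.0, 240.0],
                                  [0.0, 0.0, 1.0]])
        dist_coeffs = np.zeros(5)
        MARKER_SIZE_M = 0.25  # printed marker edge length in meters

        dictionary = aruco.getPredefinedDictionary(aruco.DICT_4X4_50)

        frame = cv2.imread("frame.jpg")  # or a frame grabbed from the onboard camera
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        corners, ids, _ = aruco.detectMarkers(gray, dictionary)
        if ids is not None:
            # Pose of each detected marker relative to the camera
            pose = aruco.estimatePoseSingleMarkers(corners, MARKER_SIZE_M,
                                                   camera_matrix, dist_coeffs)
            rvecs, tvecs = pose[0], pose[1]
            print("marker position in camera frame (m):", tvecs[0][0])

    With a downward-facing camera, the translation vector gives you roughly the horizontal offset and height in one go, which is handy for generating landing corrections.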

  • @Anthony E - this is not my area of expertise, so I'm sure others can give you more accurate information.

    We have been involved in a number of projects using image recognition and the Pi camera. The ability of a camera system to "automatically" produce high quality images is affected by the ambient conditions. Background light affects the exposure time, and this in turn affects the clarity of the image when operating from a moving platform (longer exposure times produce more blur). This becomes more complicated if there are bright and dark areas in the background that cause the exposure time to keep changing, which can lead to regularly over- and under-exposed images (see the exposure-locking sketch at the end of this reply for one partial mitigation).

    The Pi camera is set at a fixed focus, nominally at quite a close range, so the image becomes blurred when you get further away. There is a tool that you can print for yourself that lets you manually change to a different focal point, but then you will lose close-range clarity.

    Added to the loss of image clarity is the problem of target identification. Even a relatively simple, black and white target can look different under different lighting conditions, especially if the surface of the target has any "glossy" characteristics and the camera is out of focus. The target also has a different apparent size at different distances, making it more difficult to identify against a visually cluttered background or when shadows fall across the ground in a regular pattern.

    From what I have read, it looks like most people can get a simple system to work indoors and outdoors under a limited set of conditions. When the environment becomes more complex, even the simplest identification tasks get more difficult.
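
    If it helps, one partial workaround (just a sketch of the standard picamera recipe; the values are only examples) is to let the camera settle over the target area and then lock the exposure and white balance, so at least the image stops changing from frame to frame:

        import time
        from picamera import PiCamera

        camera = PiCamera(resolution=(640, 480), framerate=30)
        camera.iso = 100                               # low ISO for bright outdoor light
        time.sleep(2)                                  # let auto-gain and white balance settle
        camera.shutter_speed = camera.exposure_speed   # freeze the current exposure time
        camera.exposure_mode = 'off'                   # disable auto exposure
        gains = camera.awb_gains
        camera.awb_mode = 'off'                        # disable auto white balance
        camera.awb_gains = gains
        # Frames captured from here on keep a consistent exposure and white balance,
        # at the cost of no longer adapting to large lighting changes.

    It doesn't solve the focus or target-identification problems, but it does remove the frame-to-frame exposure hunting.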

  • @Laser Developer - Could you please expand on "less than ideal" conditions, and on the Pi camera?

    @Anthony E - Thomas' system uses an IR transmitter on the ground and a camera on the drone, presumably with an IR filter to reduce background clutter. From what I've seen this system seems to have a very good signal-to-noise ratio with little background clutter. I've got one of his transmitters and it really pumps out IR, making it very clear on my IR cameras.

    From a design point of view there's a lot to be said for "active" systems that have clearly defined targets. Passive systems that rely on image recognition seem to be a lot more problematic when conditions are less than ideal - especially if you are planning to use the Pi camera ;).
