Imagine your drone suddenly running low on power far from home while flying its automatic waypoint mission.
You have to decide, very quickly, how, where and when to set it down!
I've been playing with my homebrew Automatic Landing Finder (ALF), which uses a photo to determine where a suitable landing spot would be. The algorithm takes factors such as size and visibility into account when determining which spot to pick, choosing the best candidate from possibly thousands.
The idea is to have a downward-pointing camera taking snapshots at regular intervals. Should things go haywire, the system will analyse the last frame where the craft was still stable and try to find the best landing spot.
Eventually this system will be installed on the next version of the USAV (Unmanned Social Autonomous Vehicle), but for now it's being tested with "fake" images from Google Earth.
If you had to land anywhere on the picture below, where would it be?
Feeding the image above through the ALF reveals the places that ALF considers more or less good spots to land. The resulting image is greyscale because greyscale conversion is part of the process. Yellow squares are suitable candidates, and the red square is the chosen optimum landing site. Observe how dark spots (where poor visibility prevents a proper condition estimate) and vegetated areas are (mostly) avoided. Did you choose the same spot?
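ALF's actual scoring isn't published here, but the general idea described above (bright areas mean good visibility, low local texture suggests flat, unvegetated ground) can be sketched in a few lines. This is a minimal illustration only; the window size, stride and scoring formula are my assumptions, not ALF's:

```python
import numpy as np

def score_landing_spots(grey, win=32, stride=16):
    """Score each window of a greyscale image as a landing candidate.

    Bright, low-texture windows score high: darkness suggests poor
    visibility, and high local variance suggests vegetation or clutter.
    (Hypothetical scoring; the real ALF criteria are not published.)
    """
    h, w = grey.shape
    candidates = []
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            patch = grey[y:y + win, x:x + win].astype(float)
            brightness = patch.mean() / 255.0      # favour well-lit ground
            flatness = 1.0 / (1.0 + patch.var())   # favour uniform texture
            candidates.append(((x, y), brightness * flatness))
    candidates.sort(key=lambda c: c[1], reverse=True)
    return candidates  # candidates[0] is the chosen optimum

# Tiny demo: a bright flat quadrant next to dark, noisy "vegetation"
rng = np.random.default_rng(0)
img = rng.integers(0, 80, (64, 64), dtype=np.uint8)  # dark, noisy
img[:32, :32] = 200                                  # bright, flat patch
best_xy, best_score = score_landing_spots(img)[0]
print(best_xy)  # → (0, 0), inside the bright flat quadrant
```

A real implementation would run this over the full camera frame and, as the post shows, keep the runner-up windows as fallback candidates.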
How fast did you determine a proper place to land? The computer took roughly 842 ms to find a good spot.
Below is the same result with all candidates removed, only showing the optimum landing site.
I'd love to get some feedback on what you flying guys out there think is a good emergency landing spot in general as well as what is a bad one! :)
Best regards
Jesper Andersen
Comments
Great idea Jesper!
I've had a go along a similar line of thinking and got pretty good results:
GPS and GIS can't tell whether someone parked a car at the landing spot or construction work is going on there. The whole idea is that the drone should be able to find a spot based on how things look at the moment the need arises. Also, the drone might have strayed far from where you intended, which makes it hard to plan a landing spot in advance.
I'm also working on evolving mission planning, which means that the drone has an influence on where it goes and can alter its flight plan according to what it "discovers" during its flight. It might need to land to collect a soil sample, and it needs to determine the optimum landing spot once it has narrowed down the area of interest :)
This is definitely an interesting idea. However, I have just one question: wouldn't it be easier and faster for the drone to use GPS and GIS to find a good landing spot?
Thanks Jay. I will do an update once I get a chance to test this in real life OR when I get it ported to the Parallella computer.
This is fascinating, and not just for drones either. I'm currently training for my PPL, so if this gets to the point of being operational and fine-tunable for larger aircraft, particularly finding clear spaces large enough for, say, a Cessna 172 and presenting them as options to the pilot, that could be truly awesome.
There is an app "Xavion" which provides dynamic glide data to known airstrips, but if it could be combined with something like this, a pilot in a crunch could be helped a lot.
Best of luck and keep us posted on progress!
I am wondering whether reflectance IR photos (NDVI and other uses in agriculture) from modified webcam elements might be able to tell the difference here. [1] seems to show black for warmer waters and grey for colder pretty consistently on globe-level satellite surveys, with the IR-band-to-colour-channel mapping that particular study uses.
Using reflectance IR photos/video frames would also solve the resolution problem, as we are just talking about modified cams here, and at a LOT lower cost than thermal IR cams.
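As a rough illustration of why an NIR-capable cam helps here: the standard NDVI combines the near-infrared and red bands, and water, dense vegetation and bare soil land in clearly separated ranges. The reflectance numbers below are illustrative values, not real sensor data:

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red).

    Water typically comes out negative, dense vegetation strongly
    positive, bare soil near zero. eps avoids division by zero.
    """
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

# Illustrative per-pixel reflectances: water, dense vegetation, bare soil
nir = np.array([0.05, 0.50, 0.30])
red = np.array([0.10, 0.08, 0.25])
print(np.round(ndvi(nir, red), 2))  # water < 0, vegetation ~0.72, soil ~0.09
```

A landing-spot classifier could threshold the NDVI map to mask out both water (negative) and vegetation (high positive) before scoring candidates.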
hzl
PS: the Parallella shop is reputed to be reopening soon. I will get a bench queen to test; they support OpenCL already, so it should really be able to speed up OpenCV.
[1] http://sti.usra.edu/TRESTE/07_annual_workshop_presentations/07works...
@Rob - That's a good observation - maybe it would be an idea for the algorithm to favour landing spots that allow for some additional distance from trees etc.
@Randy - Integration with image processors is definitely a "must have" at some point. I think the smartest way is to make a generic software package that is written specifically for a high power/low footprint platform like the Adapteva Parallella. This is something that I'm working on for the USAV system, and this allows for image recognition/tracking, crop analysis, ALF etc...
@Chrisa, @Crady von Pawlak, @Pedals2Paddles - The water discussion is a tough one as you point out. But it's also a matter of how this system is used. To start with I will experiment with this in a way where the autopilot sends the suggested landing spot back to me and awaits a confirmation that it's okay to land there.
It could also be a matter of getting the drone down relatively safely and away from people, where I'd almost prefer a lake over a crowded park.
@Andrew Rabbit - Great suggestion - I will :-)
@A Lurker - I think you are right. The problem with IR cameras is that they are relatively heavy, expensive and low-resolution (unless you are ready to pay some serious dollars :) ).
Great post and great idea! Felixrising is right that we don't have an interface in ArduPlane/ArduCopter for dealing with image-processing sensors. This is very much on my mind these days, because for SparkFun we need a way to integrate the object recognition from the Odroid into the navigation of the copter. I haven't really figured out how to do that, although it will somehow use the DroneAPI.