Click on the image to play the video.
At the recent InterDrone Expo in Las Vegas we were overwhelmed by interest in our latest sense-and-avoid technology using the SF40 laser scanner. It runs with 100% data saturation and I have discussed the theory behind this device here. Moving from theory to practice, it might be useful to consider the real-world implications of such a system, since creating and managing a set of data that represents the environment around a UAV presents some interesting challenges.
The first is providing high enough data density that relatively small objects such as flag poles, fence posts and small children aren't missed. In the image above, you can see that there is no "angular separation" between the data points, which guarantees that even small obstacles will be detected. The laser achieves this by measuring so fast that successive readings (just) overlap, in this case about 1800 readings per revolution of the scanner.
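To get a feel for what 1800 readings per revolution means in practice, here is a small sketch that works out the angular step and the linear spacing between adjacent samples at a given range. The beam divergence value is an assumption for illustration, not an SF40 specification; the point is that a beam at least as wide as the angular step leaves no unscanned gap.

```python
import math

READINGS_PER_REV = 1800                        # from the post
ANGULAR_STEP_DEG = 360.0 / READINGS_PER_REV    # 0.2 degrees between samples
BEAM_DIVERGENCE_DEG = 0.2                      # assumed value for illustration

def gap_between_samples_m(range_m):
    """Linear spacing between adjacent sample centres at a given range."""
    return 2.0 * range_m * math.tan(math.radians(ANGULAR_STEP_DEG / 2.0))

# If the beam is at least as wide as the angular step, successive readings
# overlap and even a thin pole is guaranteed to be hit by at least one reading.
saturated = BEAM_DIVERGENCE_DEG >= ANGULAR_STEP_DEG

print(f"angular step: {ANGULAR_STEP_DEG:.2f} deg")
print(f"sample spacing at 20 m: {gap_between_samples_m(20.0):.3f} m")
print(f"100% saturation: {saturated}")
```

At 20 m range the sample centres sit only about 7 cm apart, which is why small obstacles can't slip between readings.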
The second challenge is to refresh the entire data set fast enough that moving objects aren't missed but not so fast that the host controller is overwhelmed with data. We need to assume that every obstacle is moving because the UAV is probably moving, but we weren't sure what refresh rate would give a "live" feeling to the data. Clicking on the image above should take you to a short video showing the response time to obstacles appearing in the field of view of the SF40 laser scanner.
This "near-real-time" interaction was achieved at a surprisingly low 5.1 frames per second. Such a slow refresh rate works because the image doesn't flicker the way a video does. Instead, the data is refreshed sequentially as the scanner rotates, and each "dot" remains static between refreshes. The human eye can't work this way: it takes in the entire scene at once, and retinal persistence holds that scene for only about 1/30 of a second. A processor, however, can follow the data refresh cycle and keep up to date as the data comes in.
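The sequential-refresh idea can be sketched as a ring buffer with one slot per bearing: as the scanner rotates, only the sector it is currently pointing at is overwritten, while every other dot keeps its last value until the scanner comes round again. This is a minimal illustrative model, not the SF40's actual firmware.

```python
NUM_POINTS = 1800      # readings per revolution (from the post)
REV_PER_SEC = 5.1      # refresh rate quoted in the post

class ScanBuffer:
    """Latest range for each bearing. Points persist between refreshes
    instead of the whole frame being cleared, as it would be in video."""

    def __init__(self, num_points=NUM_POINTS):
        self.ranges = [float('inf')] * num_points

    def update(self, index, range_m):
        # Only the sector the scanner currently points at changes;
        # every other dot keeps its previous value.
        self.ranges[index % len(self.ranges)] = range_m

buf = ScanBuffer()
buf.update(0, 3.2)      # reading at bearing 0
buf.update(900, 7.5)    # the opposite bearing, updated half a revolution later
```

A consumer reading this buffer at any moment sees a complete (if slightly time-skewed) picture of the surroundings, which is why the display feels "live" at only 5.1 revolutions per second.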
Another practicality to consider is collecting and processing the data without overloading the flight controller. We tested a Raspberry Pi 2 and found that it easily absorbed the data in real time and even produced the video output that you see above. A further simplification is available: the SF40 can "geo-fence" the UAV and raise an alarm to warn even the most basic flight controller of a nearby obstacle.
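At its simplest, the geo-fence behaviour reduces to checking whether any valid reading falls inside a safety radius. The sketch below shows that idea; the threshold value is illustrative, not an SF40 default, and the real device performs this check onboard so the host only sees the alarm signal.

```python
def geofence_alarm(ranges_m, threshold_m=2.0):
    """Return True if any valid reading is inside the safety radius.
    threshold_m is an illustrative value, not an SF40 default.
    Zero/negative readings are treated as invalid and ignored."""
    return any(r < threshold_m for r in ranges_m if r > 0)

scan = [5.0, 4.2, 1.8, 6.1]
print(geofence_alarm(scan))   # prints True: the obstacle at 1.8 m trips the alarm
```

Because the output is a single boolean, even a flight controller with no spare capacity for point-cloud processing can act on it, for example by halting forward motion.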
This project has taught us that by optimizing the measuring and rotation rates of the SF40 laser scanner, it is possible to have both 100% saturated sensing and near-real-time refresh rates without overloading the processing capacity of a relatively small flight controller.
Thanks for reading, LD.
Comments
I have written a couple of papers which you might find interesting/useful:
http://www.researchgate.net/publication/269293820_Failure_Boundary_...
https://dspace.lboro.ac.uk/dspace-jspui/bitstream/2134/16078/3/UKAC...
The first paper allows you to calculate the amount of time/distance you need to avoid collisions with moving objects. I named the theory "Failure Boundary Estimation" and the paper was presented at the American Control Conference 2014 in Portland, Oregon.
The second paper presents a simple method to calculate the amount of time until closest approach between a UAV and an object based on distance and bearing information.
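As a rough illustration of the kind of calculation the second paper describes, here is a generic constant-velocity closest-point-of-approach estimate from two (range, bearing) fixes. This is a standard textbook CPA formulation sketched by the editor, not necessarily the method in the paper; all names and values are illustrative.

```python
import math

def time_to_cpa(range0_m, bearing0_rad, range1_m, bearing1_rad, dt_s):
    """Estimate the time until closest approach from two range/bearing
    fixes taken dt_s seconds apart, assuming constant relative velocity.
    Returns the time (in seconds) measured from the second fix."""
    # Convert polar fixes to Cartesian relative positions.
    p0 = (range0_m * math.cos(bearing0_rad), range0_m * math.sin(bearing0_rad))
    p1 = (range1_m * math.cos(bearing1_rad), range1_m * math.sin(bearing1_rad))
    # Relative velocity estimated from the two fixes.
    v = ((p1[0] - p0[0]) / dt_s, (p1[1] - p0[1]) / dt_s)
    v_sq = v[0] ** 2 + v[1] ** 2
    if v_sq == 0:
        return float('inf')   # no relative motion, no approach
    # t minimising |p1 + v*t| is t = -(p1 . v) / |v|^2
    return -(p1[0] * v[0] + p1[1] * v[1]) / v_sq
```

For a head-on object seen at 10 m and then 9 m one second later, this gives nine seconds until closest approach, which is exactly the budget the "Failure Boundary" idea says you must spend avoiding it.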
These both seem quite relevant to your work. If you have any questions, feel free to PM me :)
Thanks Paul - we now have the technology but there's still lots to do on the integration side so that these sorts of sensors can become plug-and-play.
You are at the cutting edge of lasers for UAVs, congratulations, well done