100% saturated mapping for obstacle sense-and-avoid

Click on the image to play the video.

At the recent InterDrone Expo in Las Vegas we were overwhelmed by interest in our latest sense-and-avoid technology using the SF40 laser scanner. It runs with 100% data saturation and I have discussed the theory behind this device here. Moving from theory to practice, it might be useful to consider the real-world implications of such a system, since creating and managing a set of data that represents the environment around a UAV presents some interesting challenges.

The first is providing high enough data density that relatively small objects such as flag poles, fence posts and small children aren't missed. In the image above, you can see that there is no "angular separation" between the data points, which guarantees that even small obstacles will be detected. The laser achieves this by measuring so fast that successive readings overlap (just) - in this case about 1800 readings per revolution of the scanner.
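To make the overlap condition concrete, here is a quick sketch of the geometry. The 1800 readings per revolution come from the article; the beam divergence value passed in is an assumption for illustration only:

```python
import math

READINGS_PER_REV = 1800                             # from the article
ANG_STEP = math.radians(360.0 / READINGS_PER_REV)   # 0.2 degrees between readings

def gap_between_spots(range_m: float) -> float:
    """Arc-length separation between successive spot centres at a given range."""
    return range_m * ANG_STEP

def spots_overlap(beam_divergence_deg: float) -> bool:
    """Successive spots overlap (100% saturation) when the spot's angular
    width is at least the angular step. Note this is independent of range,
    since both the spot size and the gap grow linearly with distance."""
    return math.radians(beam_divergence_deg) >= ANG_STEP

print(round(gap_between_spots(10.0), 3))  # spot centres ~3.5 cm apart at 10 m
print(spots_overlap(0.2))                 # a 0.2 degree beam just closes the gap
```

The useful consequence is that once the divergence matches the angular step, the "no gaps" guarantee holds at every range out to the limit of the lidar.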

The second challenge is to refresh the entire data set fast enough that moving objects aren't missed but not so fast that the host controller is overwhelmed with data. We need to assume that every obstacle is moving because the UAV is probably moving, but we weren't sure what refresh rate would give a "live" feeling to the data. Clicking on the image above should take you to a short video showing the response time to obstacles appearing in the field of view of the SF40 laser scanner.

This "near-real-time" interaction was achieved at a surprisingly low 5.1 frames per second. Such a slow refresh rate works because the image doesn't flicker the way a video does. Instead, the data is refreshed sequentially as the scanner rotates, and each "dot" remains static between refreshes. The human eye can't work this way: it takes in the entire scene at once, and retinal persistence is too short to hold that scene for longer than about 1/30 second. A processor, however, can follow the data refresh cycle and stay up to date as the data comes in.
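The sequential-refresh idea can be sketched in a few lines. Instead of swapping whole frames, the host keeps one slot per scan angle and overwrites each slot as its reading arrives, so every slot is at most one revolution (about 1/5.1 s) old. This is a minimal illustration, not the SF40's actual firmware:

```python
READINGS_PER_REV = 1800  # matches the reading count quoted earlier

class ScanBuffer:
    """One distance slot per scan angle, refreshed sequentially."""

    def __init__(self):
        self.distances = [None] * READINGS_PER_REV  # metres, None = not yet seen

    def update(self, index: int, distance_m: float):
        """Called once per laser reading as the scanner rotates."""
        self.distances[index % READINGS_PER_REV] = distance_m

    def nearest(self):
        """Closest obstacle currently in the buffer, or None if empty."""
        seen = [d for d in self.distances if d is not None]
        return min(seen) if seen else None

buf = ScanBuffer()
for i, d in enumerate([12.0, 3.5, 40.0]):   # hypothetical readings
    buf.update(i, d)
print(buf.nearest())  # -> 3.5
```

Because each slot is overwritten in rotation order, the processor never waits for a complete frame before reacting to a close reading.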

Another practicality to consider is collecting and processing the data without overloading the flight controller. We tested a Raspberry Pi 2 and found that it easily absorbed the data in real time and even produced the video output that you see above. A further simplification is available: the SF40 can "geo-fence" the UAV and raise an alarm to warn even the most basic flight controller of a nearby obstacle.
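The SF40's on-board geo-fence logic isn't spelled out here, so the following is a hypothetical host-side equivalent: raise an alarm when any reading inside a chosen angular zone falls below a distance threshold. The zone boundaries and scan values are made up for illustration:

```python
def geofence_alarm(readings, zone_start_deg, zone_end_deg, min_dist_m):
    """readings: iterable of (angle_deg, distance_m) pairs.
    Returns True if an obstacle intrudes into the angular zone
    closer than the minimum allowed distance."""
    for angle, dist in readings:
        a = angle % 360.0
        if zone_start_deg <= a <= zone_end_deg and dist < min_dist_m:
            return True
    return False

# Hypothetical scan: an obstacle at 45 degrees, 4.2 m away
scan = [(0.0, 25.0), (45.0, 4.2), (90.0, 30.0)]
print(geofence_alarm(scan, 30.0, 60.0, 5.0))  # -> True
```

A check this simple is cheap enough that even a basic flight controller could poll it, which is the point of exposing the fence as a hardwired alarm rather than raw scan data.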

This project has taught us that by optimizing the measuring and rotation rates of the SF40 laser scanner, it is possible to have both 100% saturated sensing and near-real-time refresh rates without overloading the processing capacity of a relatively small flight controller.

Thanks for reading, LD.


Comment by Paul Meier on September 23, 2015 at 6:48am

You are at the cutting edge of lasers for UAV, congratulations, well done

Comment by Laser Developer on September 23, 2015 at 6:50am

Thanks Paul - we now have the technology but there's still lots to do on the integration side so that these sorts of sensors can become plug-and-play.

Comment by James Dunthorne on September 23, 2015 at 9:45am

I have written a couple of papers which you might find interesting/useful:

http://www.researchgate.net/publication/269293820_Failure_Boundary_...

https://dspace.lboro.ac.uk/dspace-jspui/bitstream/2134/16078/3/UKAC...

The first paper allows you to calculate the amount of time/distance you need to avoid collisions with moving objects. I named the theory "Failure Boundary Estimation" and the paper was presented at the American Control Conference 2014 in Portland, Oregon.

Using this technique, you could do things such as:

  • Calculate the distance your laser needs to measure to guarantee avoiding collisions with static or moving objects (if you presume static objects, it simplifies the maths enormously)
  • Use the information in real time to decide when an avoidance manoeuvre should be initiated
  • Give the pilot an indication of the amount of time before you need to manoeuvre
  • Verify your collision avoidance approach is reliable and safe

The second paper presents a simple method to calculate the amount of time until closest approach between a UAV and an object based on distance and bearing information.

These both seem quite relevant to your work. If you have any questions, feel free to PM me :)

Comment by JesseJay on September 23, 2015 at 11:46am

Very cool!  I'd like to put one on my rover, but for my needs your SF40 is currently too pricey.


Comment by Andrew Tridgell on September 23, 2015 at 2:47pm

Great stuff!

I'm curious about the geo-fence output and how we would integrate this with ArduPilot. What form does the geo-fence output take?

Can you also explain a bit more about the geometry of the scanning? From the picture of the scanner it looks like it doesn't scan a spherical region, so presumably it scans a "donut" shape, with the radius of the donut limited by the range of the lidar and the thickness of the donut set by the optics. Is that right? If so, what are the dimensions, and how do you imagine it would be oriented in an aircraft?

If you had it with the long axis of the donut on the x-y plane of the aircraft then there would be large blind spots top and bottom. If an aircraft is descending while flying forward then it may not see an object till too late.

Cheers, Tridge

Comment by Gary McCray on September 24, 2015 at 8:04pm

Hi LD,

Excellent results,

Looks like the data acquisition rate is really important and it is really good to know that an RPi2 can handle this data stream as well as it apparently does.

Seems to be the basis of a simple but very capable obstacle detection and avoidance system.

A really major breakthrough, and at $900.00 it is much cheaper than the slower, shorter-range and less capable Hokuyo and SICK scanners, which can't begin to operate at the real-time data density you are providing.

Best regards,

Gary

Comment by Laser Developer on September 26, 2015 at 6:33am

@James - Your exploration of simultaneously moving aircraft, and the complications that arise when computing avoidance trajectories, is definitely the next level of post-processing that is needed. At this stage we're introducing the subject in a simpler way by assuming that the UAV is conducting low-level maneuvering at slow speed. This is typical of a search-and-rescue application in uncontrolled airspace, rather than the high-speed transit corridors where fast, efficient avoidance algorithms are needed.

Thanks for your input, much appreciated. Please comment further so that we can expand public knowledge of this growing field of study.

Comment by Laser Developer on September 26, 2015 at 12:16pm

@Tridge - The geo-fence output is a combination of a hardwired alarm signal (think of this as a priority interrupt) along with serial data representing which zone inside the geo-fence is affected. Zones can be created and reprogrammed on-the-fly, through the serial port, so that the fence can be adjusted during flight to suit different modes of operation.

The scan geometry on the SF40 is a thin flat disc looking radially outwards with a range of about 100m. It is primarily to suit multi-copters during low level maneuvering. We have a different configuration altogether for forward looking obstacle detection during high speed flight, called the SF41 - I'll save this one for another discussion ;].

This raises an interesting point about the mathematics of hemispherical or spherical scanning when you want to achieve 100% saturated coverage. If you consider the projection of the laser spot on the inside of a theoretical sphere some distance away, the spot covers a surprisingly small percentage of the area of the sphere. In fact, for full coverage you would need to have at least 1.6 million laser spots. At a refresh rate of 5 scans per second, this would be a 27 million words per second data stream including the angular data. I don't think we're ready yet to tackle real-time processing at that rate.
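As a rough back-of-the-envelope check on those numbers, assuming a spot of about 0.2 degrees (matching the angular step quoted earlier) and treating each reading as distance plus azimuth plus elevation, i.e. at least 3 words:

```python
import math

SPOT_FULL_ANGLE_DEG = 0.2            # assumption: matches the angular step
half = math.radians(SPOT_FULL_ANGLE_DEG / 2)
spot_sr = math.pi * half ** 2        # solid angle of one spot (small-angle approx.)
sphere_sr = 4 * math.pi              # full sphere

spots = sphere_sr / spot_sr          # ideal tiling, no overlap: ~1.3 million
print(round(spots / 1e6, 2))         # -> 1.31

# With overlap for 100% saturation the count rises toward the ~1.6 million
# figure above; at 5 refreshes per second and >= 3 words per reading the
# stream is tens of millions of words per second before framing overhead:
words_per_sec = 1.6e6 * 5 * 3
print(words_per_sec / 1e6)           # -> 24.0 (million words per second)
```

The gap between the ideal 1.3 million and the quoted 1.6 million is the packing inefficiency of circles on a sphere plus the overlap needed for saturation, and the extra margin between 24 and 27 million words per second is framing and angular overhead.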

That having been said, there is no difficulty in using a multi-beam laser to give more depth to the disc of the SF40 if the processing capacity is available. More on this subject when we introduce the SF42 early next year.

The issue of "not landing on top of something tall and thin" is a surprisingly complex one. One simple solution is to use fixed laser beams, rather than a rotating scanner, and arrange them to give both downwards and forwards looking obstacle sensing capability. The trick here is to cover as much of the "landing" path as possible without needing a lot of processing or high data rate scanning.

Our current solution involves using a three beam laser module set in a triangular configuration that is aimed downwards at 45 degrees. The idea is that you don't necessarily descend or land vertically. Instead you spiral down so that the beams sweep the ground ahead for vertically projecting objects by looking at their sides, rather than directly on top.

This works for both a fixed-wing aircraft on a straight approach and a multi-copter on a spiral or 45-degree descent.
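A quick geometry check on the 45-degree arrangement: a beam depressed 45 degrees below the horizon meets flat ground at a forward distance equal to the current altitude, so the sensed point always leads the descent path. The altitude values here are illustrative only:

```python
import math

def ground_lookahead(altitude_m: float, depression_deg: float) -> float:
    """Horizontal distance ahead at which a downward-angled beam
    meets flat ground."""
    return altitude_m / math.tan(math.radians(depression_deg))

# At 45 degrees the lookahead equals the altitude:
print(round(ground_lookahead(20.0, 45.0), 1))  # -> 20.0 m ahead at 20 m up
```

This is why the spiral descent works: the higher the aircraft, the further ahead the beams sweep, and the lookahead shrinks in step with altitude as it comes down.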

 

Comment by Vinay Konanki on June 26, 2016 at 9:25pm

I am currently working on a UAV which can perform SLAM using the SF40.

Can you tell me how to get the data (distance, angle) out of the SF40?

Currently the LightWare terminal is provided to show the data on screen, but how can I get it into processing software like MATLAB for online computation?

Can you help me?

Comment by Laser Developer on June 27, 2016 at 11:15pm

@Vinay - have you looked at the instructions in the manual that show the command set used with the SF40? With these commands you can ask for distances within a range of angles and use the results to build a collision detection map.


© 2019   Created by Chris Anderson.