Click on the image to play the video.
At the recent InterDrone Expo in Las Vegas we were overwhelmed by interest in our latest sense-and-avoid technology using the SF40 laser scanner. It runs with 100% data saturation and I have discussed the theory behind this device here. Moving from theory to practice, it might be useful to consider the real-world implications of such a system, since creating and managing a set of data that represents the environment around a UAV presents some interesting challenges.
The first challenge is providing data density high enough that relatively small objects such as flag poles, fence posts and small children aren't missed. In the image above, you can see that there is no "angular separation" between the data points, which guarantees that even small obstacles will be detected. The laser achieves this by measuring so fast that successive readings just overlap, in this case about 1800 readings per revolution of the scanner.
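To get a feel for what 1800 readings per revolution means in practice, here is a quick back-of-the-envelope calculation of the spacing between beam centres at a few ranges. Only the readings-per-revolution figure comes from the text; the ranges are illustrative.

```python
import math

READINGS_PER_REV = 1800  # figure quoted in the article

step_deg = 360.0 / READINGS_PER_REV  # 0.2 degrees between successive readings
step_rad = math.radians(step_deg)

for range_m in (5, 10, 50):
    spacing_cm = range_m * step_rad * 100.0  # arc spacing between beam centres
    print(f"at {range_m:>2} m: readings fall {spacing_cm:.1f} cm apart")
```

So at 10 m the beam centres are only about 3.5 cm apart, and because the spots themselves overlap slightly, even a thin pole can't slip between readings.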
The second challenge is to refresh the entire data set fast enough that moving objects aren't missed but not so fast that the host controller is overwhelmed with data. We need to assume that every obstacle is moving because the UAV is probably moving, but we weren't sure what refresh rate would give a "live" feeling to the data. Clicking on the image above should take you to a short video showing the response time to obstacles appearing in the field of view of the SF40 laser scanner.
This "near-real-time" interaction was done at a surprisingly low 5.1 frames per second. The reason why such a slow refresh rate works is because the image doesn't flicker in the same way as a video. Instead, the data is refreshed sequentially as the scanner rotates and each "dot" remains static between refreshes. This is something that the human eye doesn't do because it takes in the entire scene at once and the retinal latency is too short to hold that scene for longer than 1/30 second. However, a processor can follow the data refresh cycle and keep up to date as the data comes in.
Another practicality to consider is the collection and processing of the data without overloading the flight controller. We tested a Pi2 and found that it easily absorbed the data in real time and even produced the video output that you see above. A further simplification is that the SF40 can "geo-fence" the UAV and raise an alarm to warn even the most basic flight controller of a nearby obstacle.
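As a rough sketch of what the Pi2 side of this looks like, here is how scan data might be pulled into Python. The wire format shown ("angle,distance" text lines) and the port settings are assumptions for illustration only; the real command set is in the SF40 manual.

```python
# Sketch of pulling SF40-style readings into Python on a Raspberry Pi.
# The "angle,distance\r\n" text format here is an ASSUMPTION for
# illustration -- check the SF40 manual for the actual protocol.

def parse_reading(line: str):
    """Convert one 'angle,distance' text line to (degrees, metres)."""
    angle_s, dist_s = line.strip().split(",")
    return float(angle_s), float(dist_s)

def read_scan(port="/dev/ttyUSB0", baud=115200, points=1800):
    """Collect one full revolution of readings (pyserial assumed installed)."""
    import serial  # pyserial
    readings = []
    with serial.Serial(port, baud, timeout=1) as s:
        while len(readings) < points:
            readings.append(parse_reading(s.readline().decode("ascii")))
    return readings

print(parse_reading("90.2,14.57"))  # → (90.2, 14.57)
```

The same parsing approach works for streaming the data into MATLAB or any other environment that can read a serial port.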
This project has taught us that by optimizing the measuring and rotation rates of the SF40 laser scanner, it is possible to have both 100% saturated sensing and near-real-time refresh rates without overloading the processing capacity of a relatively small flight controller.
Thanks for reading, LD.
The most likely cause is that your power supply is not correct. Ideally you should use something like a 12 V DC supply capable of delivering 1 A, to make sure that all combinations of speed and torque are catered for. If insufficient current is available, the unit stops driving the motor so that you can interrogate the status register without flattening the battery.
There are also onboard diagnostics accessible through the user interface (a USB cable is supplied with the unit), and you can usually isolate any fault using the menus.
In the status register, bit 15 is 1, i.e. major system flags are abnormal.
What should I do to make them normal? It is giving wrong readings.
Thank you very much for the response. I resolved the issue.
@Vinay - have you looked at the instructions in the manual that show the command set used with the SF40? With these commands you can ask for distances within a range of angles and use the results to build a collision detection map.
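One simple way to turn (angle, distance) readings into a collision map is to bucket the scan into sectors and keep the nearest return in each. This is only a sketch of the idea; the sector count and danger threshold are arbitrary choices, not SF40 parameters.

```python
# Minimal collision-map sketch: bucket (angle, distance) readings into
# sectors and keep the nearest return per sector. Sector count and the
# danger threshold below are illustrative, not SF40-defined values.
import math

def collision_map(readings, sectors=36, danger_m=2.0):
    """readings: iterable of (angle_deg, distance_m) pairs.
    Returns a list of (sector_start_deg, nearest_m, danger_flag)."""
    width = 360.0 / sectors
    nearest = [math.inf] * sectors
    for angle, dist in readings:
        i = int((angle % 360.0) // width)
        nearest[i] = min(nearest[i], dist)
    return [(i * width, d, d < danger_m) for i, d in enumerate(nearest)]

demo = [(10.0, 1.5), (12.0, 3.0), (100.0, 0.8), (200.0, 25.0)]
for start, d, danger in collision_map(demo):
    if d != math.inf:
        print(f"sector {start:5.1f} deg: nearest {d:.1f} m{' DANGER' if danger else ''}")
```

A flight controller can then steer away from any sector whose danger flag is set, without ever handling the raw point stream.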
I am currently working on a UAV which can perform SLAM using the SF40.
Can you tell me how to get the data (distance, angle) out of the SF40?
Currently the LightWare terminal is provided to show the data on screen, but how do I get it into processing software like MATLAB for online computation?
Can you help me?
@Tridge - The geo-fence output is a combination of a hardwired alarm signal (think of this as a priority interrupt) along with serial data representing which zone inside the geo-fence is affected. Zones can be created and reprogrammed on-the-fly, through the serial port, so that the fence can be adjusted during flight to suit different modes of operation.
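To illustrate how a flight controller might consume that combination of alarm line and serial zone data, here is a tiny sketch. The zone numbering and response text are entirely hypothetical; only the alarm-plus-zone-ID structure comes from the description above.

```python
# Illustrative geo-fence handling: a hardwired alarm line triggers an
# interrupt, then serial data identifies the affected zone. The zone
# layout and responses below are HYPOTHETICAL examples.

ZONES = {
    0: "front",  # zone numbering is an assumption for illustration
    1: "right",
    2: "rear",
    3: "left",
}

def on_alarm(zone_id: int) -> str:
    """Called when the alarm line goes active; zone_id arrives over serial."""
    sector = ZONES.get(zone_id, "unknown")
    return f"obstacle in {sector} zone -- hold position"

print(on_alarm(0))
```

Because the alarm is a plain hardwired signal, even a flight controller with no spare serial bandwidth can at least halt on it, and read the zone detail when it gets a chance.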
The scan geometry on the SF40 is a thin flat disc looking radially outwards with a range of about 100m. It is primarily suited to multi-copters during low-level maneuvering. We have a different configuration altogether for forward-looking obstacle detection during high speed flight, called the SF41 - I'll save this one for another discussion ;].
This raises an interesting point about the mathematics of hemispherical or spherical scanning when you want to achieve 100% saturated coverage. If you consider the projection of the laser spot on the inside of a theoretical sphere some distance away, the spot covers a surprisingly small percentage of the area of the sphere. In fact, for full coverage you would need to have at least 1.6 million laser spots. At a refresh rate of 5 scans per second, this would be a 27 million words per second data stream including the angular data. I don't think we're ready yet to tackle real-time processing at that rate.
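The arithmetic behind those figures can be sketched as follows. The 1.6 million spot count and 5 scans per second are from the discussion above; the three-words-per-reading packing (distance plus two angles) and the ~0.2 degree beam divergence in the sanity check are my assumptions.

```python
import math

SPOTS_FULL_SPHERE = 1.6e6  # figure from the discussion above
SCANS_PER_SEC = 5
WORDS_PER_READING = 3      # distance + two angles (an assumed packing)

readings_per_sec = SPOTS_FULL_SPHERE * SCANS_PER_SEC
words_per_sec = readings_per_sec * WORDS_PER_READING
print(f"{readings_per_sec:.1e} readings/s, ~{words_per_sec:.1e} words/s")

# Sanity check on the spot count: with an assumed ~0.2 degree beam
# divergence, each spot subtends about pi*(d/2)^2 steradians of the
# 4*pi steradian sphere. Real packing losses push the count higher.
half = math.radians(0.2) / 2
spots = (4 * math.pi) / (math.pi * half**2)
print(f"~{spots:.2e} spots for perfect packing")
```

The perfect-packing estimate lands around 1.3 million spots, so allowing for overlap and packing inefficiency the 1.6 million figure is the right order of magnitude, and the resulting stream sits in the tens of millions of words per second.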
That having been said, there is no difficulty in using a multi-beam laser to give more depth to the disc of the SF40 if the processing capacity is available. More on this subject when we introduce the SF42 early next year.
The issue of "not landing on top of something tall and thin" is a surprisingly complex one. One simple solution is to use fixed laser beams, rather than a rotating scanner, and arrange them to give both downwards and forwards looking obstacle sensing capability. The trick here is to cover as much of the "landing" path as possible without needing a lot of processing or high data rate scanning.
Our current solution involves a three-beam laser module set in a triangular configuration and aimed downwards at 45 degrees. The idea is that you don't necessarily descend or land vertically. Instead you spiral down so that the beams sweep the ground ahead, detecting vertically projecting objects by looking at their sides rather than down onto their tops.
This works for either a fixed-wing aircraft on a straight approach or a multi-copter on a spiral or 45-degree descent.
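The geometry of the 45-degree beam is simple enough to put in two lines. The function and its numbers are illustrative only; the point is that the beam intercepts the side of a tall object well before the aircraft is above it.

```python
# Geometry of a 45-degree downward-looking beam (illustrative numbers):
# from altitude h the beam meets flat ground h metres ahead, so it
# strikes the top of an obstacle of height t at (h - t) metres ahead.

def beam_hit_forward_distance(altitude_m, obstacle_height_m=0.0):
    """Forward distance at which a 45-degree down beam meets an obstacle top."""
    return altitude_m - obstacle_height_m

print(beam_hit_forward_distance(20))     # flat ground: hit 20 m ahead
print(beam_hit_forward_distance(20, 5))  # 5 m pole: side struck 15 m ahead
```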
@James - Your exploration of simultaneously moving aircraft and the complications that arise when computing avoidance trajectories is definitely the next level of post-processing that is needed. At this stage we're trying to introduce the subject in a simpler way by assuming that the UAV is conducting low level maneuvering at slow speed. This is typical of a search-and-rescue application in uncontrolled airspace rather than the high speed transit corridors where fast, efficient avoidance algorithms are needed.
Thanks for your input, much appreciated. Please comment further so that we can expand public knowledge of this growing field of study.
Looks like the data acquisition rate is really important and it is really good to know that an RPi2 can handle this data stream as well as it apparently does.
Seems to be the basis of a simple but very capable obstacle detection and avoidance system.
A really major breakthrough, and at $900.00 it is much cheaper than the slower, shorter-range and less capable Hokuyo and SICK scanners, which can't begin to operate at the real-time data density you are providing.
I'm curious about the geo-fence output and how we would integrate this with ArduPilot. What form does the geo-fence output take?
Can you also explain a bit more about the geometry of the scanning? From the picture of the scanner it looks like it doesn't scan a spherical region, so presumably it scans a "donut" shape, with the radius of the donut limited by the range of the lidar and the thickness of the donut set by the optics. Is that right? If so, what are the dimensions, and how do you imagine it would be oriented in an aircraft?
If you had it with the long axis of the donut on the x-y plane of the aircraft then there would be large blind spots top and bottom. If an aircraft is descending while flying forward then it may not see an object till too late.
Very cool! I'd like to put one on my rover, but for my needs your SF40 is currently too pricey.