3D Robotics

I was curious how well LIDAR-Lite, which is just a laser range finder out of the box, would work as a full sweeping LIDAR unit, so I set up this demo unit. The unit updates at 100Hz, so to detect a 10cm object (like a telephone pole) within a 5m arc at 10m distance (with 2x oversampling), I calculate that you need to sweep 30 degrees back and forth each second (5m = 10m * sin(30°)).
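That arithmetic can be sketched as a quick sanity check (a hedged sketch, not code from the linked zip; the 100Hz rate, 10cm target, 10m range, and 2x oversampling figures are the ones above):

```python
import math

# Figures from the post.
update_hz = 100.0        # LIDAR-Lite sample rate
target_width_m = 0.10    # telephone pole diameter
arc_width_m = 5.0        # arc to cover at 10 m distance
range_m = 10.0
oversampling = 2

# 2x oversampling on a 10 cm target means one sample every 5 cm of arc.
step_m = target_width_m / oversampling
samples_per_arc = arc_width_m / step_m        # 100 samples per sweep
sweep_time_s = samples_per_arc / update_hz    # 1 second per sweep

# A 30-degree deflection at 10 m spans the 5 m arc: 10 * sin(30°) = 5.
arc_check_m = range_m * math.sin(math.radians(30))
```

At 100 samples per second, one 5m sweep per second is exactly the budget described above.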

That's totally within the speed of a regular servo, so I threw together this test. It just uses an APM as an Arduino, sends the LIDAR-Lite and servo position data over serial, and reads and graphs the data with a Processing sketch on the laptop. The Arduino code and Processing sketch are here: LIDAR%20sweep.zip
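The serial format sketched below is an assumption for illustration (the real format is defined in the linked sketches): if each line carries an `angle,distance_cm` pair, the plotting side reduces to a polar-to-cartesian conversion, shown here in Python rather than Processing:

```python
import math

def parse_sweep_line(line):
    """Turn one assumed 'angle_deg,distance_cm' serial line into x/y
    metres for plotting, with the servo's zero angle along the x axis."""
    angle_deg, dist_cm = (float(v) for v in line.strip().split(","))
    theta = math.radians(angle_deg)
    r = dist_cm / 100.0  # centimetres to metres
    return r * math.cos(theta), r * math.sin(theta)

# Example: a return 5 m out, 30 degrees off-axis.
x, y = parse_sweep_line("30,500")
```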

BTW, the LIDAR-Lite sensor is already fully supported by the APM code as a range finder (for altitude hold with copters and/or autolanding assist with planes). You can read more about using it here.

The code for object avoidance using LIDAR-Lite is already written (thanks to Robert Lefebvre), and this would just add a sweeping component. Here's a video of it working in static mode, on a 2-axis stabilized gimbal:

Some observations:

  • It works! I can spot telephone poles with no problem
  • That said, the effective range of the LIDAR-Lite unit in this application is just 10m. I'm not using a low-pass filter on the cable, which is normally recommended, and I'm only taking one data point at each position, so I think the range can be improved with a smarter sampling strategy.
  • In practice, this would be better implemented by moving a mirror rather than the entire LIDAR unit, to avoid shaking and other off-axis movement that can interfere with sampling.
  • Laser range finders are SO MUCH BETTER than sonar. 
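The smarter sampling strategy mentioned above could be as simple as taking several readings per servo position and keeping the median, which rejects single-sample dropouts. A minimal sketch (the polling function is hypothetical, standing in for whatever reads the rangefinder):

```python
from statistics import median

def sample_position(read_distance_cm, n=5):
    """Take n readings at one servo position and return the median.
    read_distance_cm is any zero-argument function that polls the
    rangefinder once and returns a distance in centimetres."""
    return median(read_distance_cm() for _ in range(n))

# Example with a fake sensor that returns one dropout among good reads.
readings = iter([1010, 1005, 40, 1008, 1003])  # 40 cm is the outlier
filtered = sample_position(lambda: next(readings))  # -> 1005
```

The trade-off is sweep speed: five reads per position at 100Hz slows the sweep five-fold, so in practice you would oversample only positions that returned suspicious values.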


Comments

  • I have been working on a design for my next fixed-wing incorporating the great new rangefinder. The inspiration came from a flight during which my plane dipped below a hilltop breaking my LOS. I luckily had good RTH settings which took the plane back up to altitude, but this started me thinking about obstacle avoidance and terrain following.

    I considered attaching a laser module to a servo, or bouncing the laser off of a mirror attached to a servo, to take measurements at regular angles/intervals. I settled on a keep-it-simple solution: integrating the module into the airframe at a fixed 45-degree angle. The laser points slightly downward/forward to clear the prop at the front of the plane and senses 20m in front of and below the airframe.
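    The geometry of that fixed mount is worth a quick check (assumed numbers: the 45-degree down-angle and a 20m slant range, both from the description above):

```python
import math

# Assumed mounting geometry: laser fixed 45 degrees down/forward,
# with roughly 20 m of usable slant range.
down_angle_deg = 45.0
slant_range_m = 20.0

forward_m = slant_range_m * math.cos(math.radians(down_angle_deg))
below_m = slant_range_m * math.sin(math.radians(down_angle_deg))
# Both components come to about 14.1 m, so a raw 20 m slant reading
# maps to roughly 14 m ahead of and 14 m below the airframe.
```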

    This limits the amount of information to the direction of travel, but this should be adequate for relatively slow flight in open areas. It also cuts down on the complexity-related risks I have run into (servo flutter, failure, noise from other systems) as well as keeping weight down.

    In terms of logic, when an object is detected within a given distance from the plane:

    1) The plane executes a right turn (within 5-10m of the nose at the point of measurement),
    2) During the turn the AP takes the airframe up a given distance (1-2m) to a higher altitude,
    3) When the airframe returns to straight and level flight at the original heading, another reading is taken.

    If the obstruction remains, the pattern is repeated.
    If there is no obstruction, the plane continues on the given heading.
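    The steps above can be sketched as a minimal state machine (a hedged illustration, not flight code; the state names and function are hypothetical):

```python
def avoidance_step(state, obstacle_within_threshold):
    """One decision tick of the detect / turn-right / climb / recheck
    loop described above. States: 'cruise' and 'turning'."""
    if state == "cruise":
        if obstacle_within_threshold:
            return "turning"   # step 1: execute a right turn
        return "cruise"        # clear path: hold heading
    if state == "turning":
        # Step 2 (climb 1-2 m) happens during the turn; step 3: once
        # straight and level on the original heading, take a new
        # reading, which feeds the next call as the obstacle flag.
        return "cruise"
    raise ValueError("unknown state: " + state)

# Obstacle seen, turn/climb executed, then the path reads clear.
s = avoidance_step("cruise", True)   # -> "turning"
s = avoidance_step(s, False)         # -> "cruise"
```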

    During the turn the laser could gather more information about the surroundings, but it is important to realize that the plane will initially execute the turn into a "blind spot" to the right of the plane. I generally fly in treeless areas and can visually clear the turn area from the ground or with an on-board FPV camera. I also have a relatively tight turn radius (10m), so this combination is adequate for my purposes.

  • As someone who has been flying with laser scanners commercially (underground...avoids the law) for over a year now, I have to agree 100% with what LD states. We currently use a SICK scanner and fly in 8' x 8' tunnels, but we have tried all the other low-cost solutions.

    We have moved on to TOF cameras now, for much of what LD states.

    We are getting very good results with a multi-camera platform; the only issue is the size of the data stream, which presents other problems.

    I would also suggest that it is best to use a separate system for the avoidance and then just command the autopilot via RC PWM. This allows you to use any unit and not worry about code/hardware development outside your hardware.

  • Good work guys - I'll be joining in with my own version soon enough.

  • I'm stuck at an impasse with mine, after realising I have no Pixhawks left to use as the test bed. I have plenty of APMs; can I rig up the I2C splitter to an APM? It looks to be Pixhawk-only.

    http://store.3drobotics.com/products/pixhawk-i2c-splitter

  • Hi Everybody,

    I am very happy to see such a positive and interested group of people on this topic.

    I agree with LD that mirrors are not an optimal solution, especially now that we have light inexpensive laser rangefinders.

    I wasn't actually promoting the idea of a mirror-based system; they are limited, as he says. Although, if you keep the distance from the laser to the mirror short, the backscatter from the mirror's surface can be taken into account by delaying the timing start until after the backscatter light would have gotten to the mirror (and back).
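    The timescale involved is worth noting: with an assumed 5cm laser-to-mirror distance (my figure, not one from the comment), the backscatter round trip is under a nanosecond, so the gating window is extremely tight:

```python
# Assumed laser-to-mirror distance for illustration.
c_m_per_s = 299_792_458.0   # speed of light
mirror_dist_m = 0.05

# Round-trip time of the backscatter from the mirror surface.
delay_s = 2 * mirror_dist_m / c_m_per_s   # about 0.33 nanoseconds
```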

    And although galvanometers are actually a very effective way of moving something light like a first-surface mirror, they hold position against strong electromagnetic tension (a balance), which draws an unacceptably high amount of current for our applications.

    A brushless motor (gimbal motor) in this application should have most of the advantages of a galvo without the need for excessive power (although it actually works like a rotary group of small galvos at the single-pole level).

    A brushless gimbal can also easily accommodate the mass of a small laser scanner.

    Servos, on the other hand, are intrinsically more problematic because their internal geared mechanism makes them somewhat jerky, especially when reversed or started and stopped, because of gear lash.

    I also agree with LD that what you really want is the information you actually need: rather than looking at these sensors anthropomorphically, as a human-like "vision" system, it is better to consider them an appropriate sensor system for what your navigation controls actually need to accomplish.

    Generally for us at this stage object avoidance and possibly path finding are realizable goals.

    Trying for the bigger, more global goals of mapping and eventually even SLAM takes a lot more horsepower, both to produce a rich point cloud and to evaluate and make use of it.

    Too much information is as big a problem as too little.

    At this point a simple X/Y scanned laser rangefinder can provide sufficient data for a good start at object detection and avoidance and with a bit of work at least limited path finding.

    It could be improved by allowing its scan (spatial resolution) to be dynamically moved or changed to allow it to "focus" on "objects" of interest.

    I'm starting to work on this myself. I think that relative navigation based on perception of the actual immediate environment is the most significant enabling feature, not only for our "drones" but for robotics in general.

    Best regards,

    Gary

  • There is a big difference between a mapping LIDAR and a collision avoidance system.

    When designing a product to perform a specific task there are numerous trade-offs. Rob is correct in suggesting that there are relationships between accuracy and price, weight and performance and so on. To arrive at an effective and economical solution it's sometimes necessary to redefine the problem in non-conventional ways to avoid technical or economic dead ends.

    Scanning laser systems have evolved from the field of surveying where accuracy is everything. Size, weight and price are sacrificed in exchange for high precision. The only thing that these systems have of value to an anti-collision system is their non-contact, measuring technology. Everything else is pretty much baggage.

    In redefining the LIDAR problem into an anti-collision problem we discover that there are totally different trade-offs to be made:

    1. Range and update rate are more important than accuracy and resolution - detecting an obstacle early gives more time to take corrective action. It doesn't matter whether the obstacle is 0.05 degrees from dead center or 1 degree from dead center, you're still going to take evasive action.

    2. Data processing and communications overheads must be kept to a minimum - you do not have time to collect and process a high resolution 3D map of your environment when you're having enough difficulty just keeping the bird in the air. Instead, you want a basic warning system that tells you to change course to the left or right or slow down whilst you reconsider your options.

    3. SLAM is not collision avoidance - ground based robots operating inside a room or on the road have limited space and closed boundaries. They need to know where those boundaries are at all times in order to make navigational decisions. Aerial vehicles, on the other hand, operate in the exact opposite type of environment. They can go in any direction until they run out of fuel so there's no point in mapping empty space. If you want to know where you are, look at the GPS. However, sometimes there are unexpected things in the way and they are coming towards you really, really fast.

    4. Weight, weight, weight - enough said.

    5. The system must work in difficult conditions - the whole point about anti-collision is that you need it when things are going wrong, not when you're trying to land in a space the size of Area 51 on a clear day. That means it must tolerate poor or no available light, complex contrast patterns and numerous, simultaneous obstacles.

    IMHO, there is a difference between a useful airframe with its control systems versus the payload it's going to carry. So even if that payload is a survey grade LIDAR, a FLIR camera and a GoPro, I think you still need a collision avoidance system. That system may also have SLAM capability, but in contrast to Chris' experiment, in normal running mode you don't even want to know that the system is there until it detects an obstacle that needs avoiding. Only once that happens do you start overriding navigational directives.

  • I can see that for simple and effective obstacle avoidance, this setup, or the brushless one reported in a previous comment, would do the job well. I agree that for SLAM you need something more reliable and fast (I've been working with Hokuyo LRFs, but they are very expensive).

  • Very interesting comments LD.

    I get the feeling that the realities of the technology mean that if we have weight and cost restraints, we will actually need to consider two different systems for two different uses.  I had assumed that my work with the rangefinder on a servo gimbal was an evolutionary dead-end.  But your comments reveal that for simple collision avoidance at longer range, it allows the use of a higher-powered rangefinder for a reasonable cost and weight.

    However, if you wish to do SLAM-type operation, a proper scanning system is better.  But for a given cost/weight budget, its range will end up being shorter than a rangefinder system's.  So you really need to determine which you want to do, as the two systems do not overlap.

    Do you think that is correct?

  • Amazing...

  • Wow, impressed. GG Chris.
