Osram's Laser Chip for Lidar

4-channel LIDAR laser


Dimensions: 8 mm x 5 mm
Peak optical output: 85 W at 30 A per channel
Wavelength: 905 nm
Pulse length: < 5 ns
Operating voltage: 24 V

The overall lidar system covers 120 degrees in the horizontal plane, with 0.1 degree of resolution, and 20 degrees in the vertical plane, with 0.5 degree of resolution. In the light of day, it should detect cars from at least 200 meters away, and pedestrians at 70 meters out.

The MEMS chip can operate at up to 2 kilohertz.

The company says test samples will be available in 2017, and that commercial models could arrive in 2018.

With mass production, the price should drop to around 40 Euros (US $43.50).
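A quick sanity check of the scan geometry implied by those figures (a sketch in Python; all numbers are taken from the article above):

```python
# Scan geometry implied by the published specs (figures from the article).
H_FOV, H_STEP = 120.0, 0.1   # horizontal field of view and resolution, degrees
V_FOV, V_STEP = 20.0, 0.5    # vertical field of view and resolution, degrees

h_points = round(H_FOV / H_STEP)     # points per horizontal sweep
v_points = round(V_FOV / V_STEP)     # vertical steps
frame_points = h_points * v_points   # points in one full frame

print(h_points, v_points, frame_points)  # 1200 40 48000
```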

http://spectrum.ieee.org/cars-that-think/transportation/sensors/osrams-laser-chip-for-lidar-promises-supershort-pulses-in-a-smaller-package

http://www.osram-group.de/en/media/news/press-releases/pr-2016/07-11-2016


Comments

  • Good point @Laser Developer,

    since you have developed a Lidar of your own.

    What is the Lidar beam divergence in the case of your equipment?

    Since Osram has not disclosed any beam-divergence figure (or it is too early to say, since the project is still in development), a general estimate is published in the Lidar entry of the GIS wiki:

    http://wiki.gis.com/wiki/index.php/LIDAR

    "

    Beam divergence: Unlike a true laser system, the trajectories of photons in a beam emitted from a LIDAR instrument deviate slightly from the beam propagation line (axis) and form a narrow cone rather than the thin cylinder typical of true laser systems. The term “beam divergence” refers to the increase in beam diameter that occurs as the distance between the laser instrument and a plane that intersects the beam axis increases. Typical beam divergence settings range from 0.1 to 1.0 millirad. At 0.3 millirad, the diameter of the beam at a distance of 1000 m from the instrument is approximately 30 cm. Because the total amount of pulse energy remains constant regardless of the beam divergence, at a larger beam divergence, the pulse energy is spread over a larger area, leading to a lower signal-to-noise ratio.

    "

    "

    the diameter of the beam at a distance of 1000 m from the instrument is approximately 30 cm

    "

    so the beam diameter is about 6 cm at a distance of 200 m

    and about 2 cm at a distance of 70 m

    --

    "

    So at a distance of 70 m, our Lidar can scan 1,200 individual points within a 242.48 m long horizontal line segment

    giving a spatial horizontal resolution of 0.20 m

    "

    giving a spatial horizontal resolution of 0.20 m +/- 0.02 m at a distance of 70 m

    "

    At a distance of 200 m, individual scan points are separated by a distance of 0.5 m

    so a pedestrian can be missed at a distance of 200 m if not facing the Lidar directly.

    "

    horizontal, linear separation at a distance of 200 m: 0.5 m +/- 0.06 m (error due to beam divergence)

    not bad, since the above are general estimates only

    and

    "

    so a pedestrian can be missed at a distance of 200 m if not facing the Lidar directly.

    "

    and the above looks to still be true, not affected by the Lidar beam-divergence error.
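    The wiki's rule of thumb is easy to check numerically (a sketch; the 0.3 mrad divergence is the wiki's example figure, not anything Osram has published):

```python
# Beam diameter at range under the small-angle approximation:
# diameter ~= distance * full divergence angle (in radians).
# 0.3 mrad is the GIS-wiki example value, not an Osram spec.
def beam_diameter_m(distance_m, divergence_rad=0.3e-3):
    return distance_m * divergence_rad

print(beam_diameter_m(1000))  # ~0.3 m, the wiki's ~30 cm at 1000 m
print(beam_diameter_m(200))   # ~0.06 m -> 6 cm at 200 m
print(beam_diameter_m(70))    # ~0.021 m -> ~2 cm at 70 m
```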

  • Somewhere there is a psychiatric ward missing a patient.

  • He also forgot about other 'details' such as parallel processing. Basically, he does not know what he is talking about.

    https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect

  • Darius, is that you? Or did you just conveniently forget to add LiDAR beam divergence into your calculation?

  • @Gary,

    from the original post

    "

    Dimensions: 8 mm x 5 mm
    Peak optical output: 85 W at 30 A per channel
    Wavelength: 905 nm
    Pulse length: < 5 ns
    Operating voltage: 24 V

    The overall lidar system covers 120 degrees in the horizontal plane, with 0.1 degree of resolution, and 20 degrees in the vertical plane, with 0.5 degree of resolution. In the light of day, it should detect cars from at least 200 meters away, and pedestrians at 70 meters out.

    The MEMS chip can operate at up to 2 kilohertz

    "

    Pulse length < 5 ns .... call it 5 ns.

    The laser diode is on for 5 ns

    and off for another 5 ns.

    So a single laser diode in this LIDAR module can be pulsed at 1,000,000,000 / 10 = 100,000,000 Hz = 100 MHz

    (in theory)

    The MEMS chip can operate at up to 2 kilohertz,

    limiting the above pulsing frequency, i.e. the ability to scan 100M points per second.
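    The arithmetic above can be written out (a sketch; the equal on/off assumption is the comment's, not an Osram spec):

```python
# Theoretical pulse-rate ceiling from the 5 ns pulse length alone,
# assuming (as above) equal on and off times. The 2 kHz MEMS mirror,
# not the diode, is what actually bounds the scan rate.
ON_NS, OFF_NS = 5, 5
period_ns = ON_NS + OFF_NS       # 10 ns per on/off cycle
max_rate_hz = 1e9 / period_ns    # 1,000,000,000 / 10 = 100 MHz
print(max_rate_hz)               # 100000000.0
```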

    The overall lidar system covers 120 degrees in the horizontal plane, with 0.1 degree of resolution

    So 1,200 points can be scanned in the horizontal plane (covered by 120 degrees)

    20 degrees in the vertical plane, with 0.5 degree of resolution.

    So 40 points.

    So this Lidar has 1,200 x 40 point resolution if static

    In the light of day, it should detect cars from at least 200 meters away, and pedestrians at 70 meters out.

    Angular resolution Wikipedia

    https://en.wikipedia.org/wiki/Angular_resolution

    Basics of trigonometry

    https://www.khanacademy.org/math/trigonometry/trigonometry-right-tr...

    Visual trigonometry calculator

    http://www.visualtrig.com/

    We start with right-triangle calculations, since we can divide the 120 degrees into 2 x 60 degrees

    The adjacent side is our distance:

    70 meters in the first case (pedestrian)

    and 200 meters in the second case (car)

    We need to calculate the opposite side and double it to get the horizontal line segment covered by the 120 degrees (in the horizontal plane)

    The length of the opposite side calculated for the 60 degree angle is 346.41 meters at a distance of 200 meters

    Doubling this value we get 692.82 meters

    So at a distance of 200 meters, our Lidar can scan 1,200 individual points (at native resolution in static mode).

    And individual points are separated by 0.577 meter, so if your car is 5 meters long, 8 or 9 scan points may hit it.

    In the case of a pedestrian at a distance of 70 meters:

    The length of the opposite side is 121.24 meters, which doubled gives 242.48 meters

    So at a distance of 70 meters, our Lidar can scan 1,200 individual points within a 242.48 meter long horizontal line segment

    giving a spatial horizontal resolution of 0.20 meter

    So Osram is right to claim the ability to detect a person at a distance of 70 meters, since even a thin human is still about 0.40 meter wide, and so can be hit by 2 or 3 scan points.

    OK, I have projected an angular, spherical resolution onto a straight horizontal line.

    You are free to redo the calculations as homework

    As you can see, the horizontal resolution of this Lidar is comparable to that of a 1 Mpix camera: 1,000 vs. 1,200 points

    At a distance of 200 meters, individual scan points are separated by a distance of 0.5 meter

    so a pedestrian can be missed at a distance of 200 meters if not facing the Lidar directly.
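    The right-triangle arithmetic above can be reproduced in a few lines (a sketch; like the text, it projects the angular steps onto a flat line rather than an arc):

```python
import math

# Spacing between adjacent scan points projected onto a straight line
# at a given range, for a 120 deg FOV split into 1,200 steps.
def line_spacing_m(distance_m, fov_deg=120.0, n_points=1200):
    half_width = distance_m * math.tan(math.radians(fov_deg / 2))
    return 2 * half_width / n_points

print(round(line_spacing_m(70), 3))   # 0.202 m at 70 m (pedestrian case)
print(round(line_spacing_m(200), 3))  # 0.577 m at 200 m (car case)
```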

  • @Gary

    In case you do not know it: this guy (I still believe it is not an actual person but just an AI :P) always shows the same behavior. He scans for keywords in the title and posts, then pastes a link containing those keywords and makes "critical" / negative / non-constructive comments.

    It does not matter whether you talk about maths, technology or unicorns. We have all engaged with this troll at least once here :P.

  • Lidar has a known limit on pulse frequency.

    A video camera can easily be upgraded to a 100 Mpix sensor.

    And this is why we will never hear about your "global innovations", Dr. Frankenstein.

  • Hi Global,

    Beam divergence can exactly compensate for scanning dispersion; in that case there are no "holes" in the scan, and thus no missed items.

    It is not uncommon for laser TOF scanners to scan in precisely this fashion.

    Your claim was not about missed items in any case; it was merely the incorrect statement that Lidar resolution degrades with distance just as a camera's does, which is completely untrue.

    In an X/Y camera array, you can also miss items at a distance beyond their resolution (or focus) onto the individual pixels.

    So, in fact, the phenomenon of missing objects at a distance completely is just as valid for a camera as for a scanned device.

    And two additional things are true about the scanned device that are not for the camera.

    First, for items that are resolved at a specific distance, that distance measurement is, exactly as I said, far more accurate than any camera system's, because it is TOF measured in discrete time units as opposed to decreasing angular separation.

    Second, the scanning step density and beam spread can be adjusted to ensure 100% coverage to the effective limit of the reflected laser light or as the user wishes.

    As a matter of fact it can be adjusted dynamically if desired to first acquire an object then inspect it more closely with reduced beam spread and tighter step density.

    (Effectively zooming in on a specific area of interest as desired.)

    Arguably you could do something similar with a camera and zoom lens and variable step gimbal, but the computational overhead in determining any useful depth (Z) information would be very significant.

    Certainly the speed of light represents a factor that needs to be taken into account, but there are already multiple LIDAR units that send out a coded pulse so that return pulses can be received out of order and still be fully resolved in time.

    And generally you can still achieve very high resolution coverage without resorting to overlapping pulses at more common vision distances, say the 100 to 300 foot range.

    And even the chip in this article provides an improved laser system by providing 4 independent lasers that are beam aligned with each other.

    Going to a higher resolution camera does make the camera array more sensitive to smaller angular deviations for detecting distance, but it still does not approach the absolute accuracy of the TOF measurement system employed by a LIDAR scanner.
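    The accuracy comparison can be made concrete with textbook error models (a sketch; the baseline, focal length and timing-jitter values are illustrative assumptions, not figures from this thread): TOF range error is set by timing resolution and is flat with distance, while stereo depth error grows with the square of distance.

```python
# Illustrative TOF vs. stereo depth-error comparison. Every parameter
# value here is an assumption for the sketch, not a spec from the thread.
C = 299792458.0  # speed of light, m/s

def tof_error_m(timing_jitter_s=1e-9):
    # Round-trip timing uncertainty maps to range error: dZ = c * dt / 2
    return C * timing_jitter_s / 2

def stereo_error_m(z_m, baseline_m=0.2, focal_px=1000.0, disp_err_px=0.5):
    # Standard stereo model: dZ = Z^2 * d_disparity / (f * B)
    return z_m ** 2 * disp_err_px / (focal_px * baseline_m)

for z in (10, 70, 200):
    # TOF error stays ~0.15 m at every range; stereo error grows as Z^2
    print(z, round(tof_error_m(), 3), round(stereo_error_m(z), 3))
```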

    Your statement that LIDAR can miss some objects and then need to capture or recapture them, requiring caching, is true, but it is even more true for a camera, which depends on external (generally ambient) illumination to perceive objects, and that illumination can change drastically and rapidly.

    LIDAR provides its own fixed-quantity, highly bandwidth-specific illumination, which is far more predictable and therefore far more immune to this phenomenon than a camera is.

    So by your analogy, the camera would actually require much more "caching" than a LIDAR.

    In fact if beam spread is equivalent to step angle and beam power is sufficient to generally ensure detection of reflected light pulse, caching would be 100% unnecessary.

    That is never true for a camera using ambient light.

    The point that you are continuously reiterating about LIDAR depth information being no better than stereo camera depth information is simply 100% wrong.

    And, of course, there is the other kind of LIDAR, namely the one that uses an X/Y array of avalanche photodiodes, each of which detects in real time the TOF of the reflection of a high pulse energy, low average energy, broad-beam laser flash.

    This is not scanned, but is dependent on the focus and spread of the lens in front of the X/Y array (which could be "zoomed" too, of course).

    DJI is proving that cameras can be effectively used for simple object avoidance and I am not disputing that it can be done.

    What I am disputing is that it is better than a competent LIDAR setup as the primary basis for competent navigation in a complex (rich) environment.

    Note, DJI appears to be developing IR active reflective systems in the INSPIRE 2 as we speak.

    I do not know whether they are TOF or simple angular separation.

    I believe you are disputing the value of LIDAR based on suppositions that are simply incorrect, possibly even purposely so perhaps in the same manner which politicians feel entitled to try to convince their constituents that things are true which they actually know not to be true.

    As a result of that, this will be my last post on this BLOG, it is a game I do not wish to play.

    Either you know the truth and are purposely misrepresenting it in order to string me along, or you have a misplaced faith in something for whatever reason.

    I am flat, I take things at face value and respond without guile, so I am out of here.

  • @Gary

    once again

    "sorry Gary but resolution of Lidar exactly degrades for far objects as camera image.

    http://www.lidar-uk.com/how-lidar-works/"

    In no way does this make the claim that the accuracy of Lidar degrades over distance.

    ==

    Lidar offers angular resolution, which is not affected by distance, but distance affects linear resolution.

    So in theory your Lidar can still offer the same angular resolution on distant objects, with high accuracy (the limits are known),

    but linear (horizontal) resolution degrades with distance, so Lidar can easily miss distant vertical obstacles (like trees, power poles, mobile telephony towers) that are not covered by the angular resolution at that distance.

    So in theory and in practice, your Lidar can hide distant vertical objects.

    A video camera offers greater angular resolution and scans a 10 Mpix landscape image in one shot at 30-60 fps.

    So a single-laser-diode Lidar must be pulsed 10M x 30 (or 60) times per second to offer a comparable landscape image.

    Since Lidar can miss distant vertical obstacles due to degradation in linear (horizontal) resolution, it makes no sense to claim otherwise.

    Finally, what we really need is a phased-array laser; a video camera is exactly an array scanner, while Lidar is still a low-tech single-point scanner.

    Lidar has a known limit on pulse frequency.

    A video camera can easily be upgraded to a 100 Mpix sensor.

    Lidar is limited by the laser diode's upper pulse-frequency limit.

    Since Lidar can hide and re-detect the same obstacles at random, you are required to cache Lidar scans for a given geolocation to build a 3D map on the fly, so as not to miss any previously detected vertical obstacle.

    A companion computer is too slow to offer such support in real time.

    Data processing in the cloud is no more reliable, due to communication problems.

    So forget Lidar and invest pocket money in video-based obstacle avoidance technology

    and spend the rest on the development of an LE phased-array radar.

    http://www.lidar-uk.com/how-lidar-works/
  • Hi Ouroboros,

    Great concept, I think it has merit.

    Many of the parts, module functionality and requirements are much the same or at least very similar.

    I don't know if there is anything to be gained by trying to make a mixed chip, but a mixed system certainly.

    I am guessing that's where the autonomous car / vehicle industry is already headed in fact.

    Both methods have advantages that can be used to complement each other and anything that provides more reliable data can provide better and SAFER navigation.

    And I would expect that for automotive use, safety will be paramount.

    Best,

    Gary
