4-channel LIDAR laser

Dimensions 8 mm x 5 mm
Peak optical output 85 W at 30 A per channel
Wavelength 905 nm
Pulse length < 5 ns
Operating voltage 24 V

The overall lidar system covers 120 degrees in the horizontal plane, with 0.1 degree of resolution, and 20 degrees in the vertical plane, with 0.5 degree of resolution. In the light of day, it should detect cars from at least 200 meters away, and pedestrians at 70 meters out.
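To get a feel for what 0.1 degree of horizontal resolution means at those detection ranges, here is a quick sketch (my own back-of-envelope, not Osram's figures) approximating the linear spacing between adjacent scan points as range times angular resolution in radians:

```python
import math

# Approximate linear spacing between adjacent scan points at a given range,
# using the small-angle approximation: spacing = range * resolution (in radians).
def point_spacing_m(range_m, resolution_deg):
    return range_m * math.radians(resolution_deg)

print(round(point_spacing_m(200, 0.1), 3))  # spacing at the 200 m car range
print(round(point_spacing_m(70, 0.1), 3))   # spacing at the 70 m pedestrian range
```

At 0.1 degree, adjacent points land roughly 0.35 m apart at 200 m and 0.12 m apart at 70 m, which is consistent with detecting cars at the former range and pedestrians at the latter.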

The MEMS chip can operate at up to 2 kilohertz

The company says test samples will be available in 2017, and that commercial models could arrive in 2018.

With mass production, the price should drop to around 40 Euros (US $43.50).


http://spectrum.ieee.org/cars-that-think/transportation/sensors/osr...

http://www.osram-group.de/en/media/news/press-releases/pr-2016/07-1...


Comment by Gary McCray on November 15, 2016 at 4:17pm

Global,

From You:

"sorry Gary but resolution of Lidar exactly degrades for far objects as camera image.

http://www.lidar-uk.com/how-lidar-works/"

In no way does this make the claim that the accuracy of Lidar degrades over distance.

In fact, Lidar is measured by "Time Of Flight" using very accurate clocks, so the accuracy at 1 foot is identical to the accuracy at 1000 feet, since the measurement is made in discrete time units.

This is the actual quote from the above article you linked:

"Light moves at a constant and known speed so the LiDAR instrument can calculate the distance between itself and the target with high accuracy."

If it has 1/4" accuracy at one foot it can have 1/4" accuracy at 1000 feet.

Modern scanning LIDAR systems are now almost all direct and simply depend on the accuracy of measuring each laser pulse's time of flight.

Camera distance determination is done by the visual angular offset of lines or "areas" computationally determined to be coincident, either from two cameras in a true stereoscopic setup or from pictures taken by the same camera at different times with a known displacement between shots.

With the camera scenario, the further away the visualized object is, the smaller the angular difference between cameras (or successive shots), and therefore the less accurate the measurement is.

In fact, this difference in accuracy is extreme.

A camera or cameras are in no way an equivalent device to a Lidar.

They can be fine for X/Y measurements as they are an X/Y device, but they completely lack the intrinsic granular accuracy of LIDAR which actively measures the distance to each spot.

It appears to me that your claim of equivalence is completely baseless.

Because it is time-scan based, TOF Lidar can be scan-rate limited, so you can trade off among update frequency, maximum distance, and X/Y resolution for the envelope that best captures what you want.

Also, since Lidar requires dynamic laser illumination, it is limited by how far the laser can effectively be reflected off the target environment.

But as long as you can achieve a valid reflected return the accuracy is guaranteed to be equivalent to clock accuracy.

The angular separation between stereo camera readings simply becomes less and less as distance increases so the resolution becomes equivalently less and less.
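The stereo effect described above can be illustrated with the idealized disparity model Z = f * B / d, where f is the focal length in pixels, B the baseline, and d the disparity. The focal length and baseline below are made-up illustrative values; the point is that a fixed 1-pixel disparity error produces a depth error that grows roughly as the square of the distance.

```python
# Idealized stereo depth from disparity: Z = f * B / d.
def depth_m(focal_px, baseline_m, disparity_px):
    return focal_px * baseline_m / disparity_px

f_px, baseline = 1000.0, 0.2            # hypothetical focal length (px) and baseline (m)
for z in (10.0, 50.0):
    d = f_px * baseline / z             # true disparity at depth z
    err = depth_m(f_px, baseline, d - 1.0) - z  # depth error from a 1 px disparity error
    print(z, round(err, 2))
```

With these numbers, the same 1-pixel error costs about half a meter of depth accuracy at 10 m but nearly 17 m at 50 m, which is the "less and less" resolution Gary describes.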

I believe these simple facts are - simply - indisputable.

Comment by Gary McCray on November 15, 2016 at 4:54pm

Hi Ouroboros,

Great concept, I think it has merit.

Many of the parts, module functionality and requirements are much the same or at least very similar.

I don't know if there is anything to be gained by trying to make a mixed chip, but a mixed system certainly.

I am guessing that's where the autonomous car / vehicle industry is already headed in fact.

Both methods have advantages that can be used to complement each other and anything that provides more reliable data can provide better and SAFER navigation.

And I would expect that for automotive use, safety will be paramount.

Best,

Gary

Comment by Global Innovator on November 16, 2016 at 4:28pm

@Gary

once again

"sorry Gary but resolution of Lidar exactly degrades for far objects as camera image.

http://www.lidar-uk.com/how-lidar-works/"

In no way does this make the claim that the accuracy of Lidar degrades over distance.

==

Lidar offers angular resolution, which is not affected by distance, but distance does affect linear resolution.

So in theory your Lidar can still offer the same angular resolution on distant objects, with high accuracy (the limits are known),

but linear (horizontal) resolution degrades with distance, so Lidar can easily miss distant vertical obstacles (like trees, power poles, mobile telephony towers) if they fall between scan points at that distance.

So in theory and in practice, your Lidar can miss distant vertical objects.

A video camera offers greater angular resolution and scans a 10 Mpix landscape image in one shot at 30-60 fps.

So a single-laser-diode Lidar must be pulsed 10M x 30 (or 60) times per second to offer a comparable landscape image.

Since Lidar can miss distant vertical obstacles due to the degradation in linear (horizontal) resolution, it makes no sense to claim otherwise.

Finally, what we really need is a phased-array laser; a video camera is exactly an array scanner, while Lidar is still a low-tech single-point scanner.

Lidar has a known limit on pulse frequency.

A video camera can easily be upgraded to a 100 Mpix matrix.

Lidar is limited by the laser diode's upper pulse-frequency limit.

Since Lidar can miss and re-detect the same obstacles at random, you are required to cache Lidar scans for a given geolocation to build a 3D map on the fly, so as not to miss any previously detected vertical obstacle.

A companion computer is too slow to offer such support in real time.

Data processing in the cloud is no more reliable, due to communication problems.

So forget Lidar, invest pocket money in video-based obstacle avoidance technology,

and spend the rest on the development of an LE phased-array radar.

Comment by Gary McCray on November 16, 2016 at 10:10pm

Hi Global,

Beam divergence can exactly compensate for scanning dispersion, in which case there are no "holes" in the scan and thus no missed items.

It is not uncommon for laser TOF scanners to scan in precisely this fashion.

Your claim was not about missed items in any case; it was an incorrect statement that resolution degrades at distance equivalently to a camera, which is completely untrue.

In an X/Y camera array, you can also miss items at a distance once they are beyond the resolution (or focus) of the individual pixels.

So, in fact, the phenomenon of missing objects at a distance completely is just as valid for a camera as for a scanned device.

And two additional things are true about the scanned device that are not for the camera.

First, for items that are resolved at a specific distance, that distance measurement is, exactly as I said, far more accurate than any camera system's, because it is TOF measured in discrete time units as opposed to decreasing angular separation.

Second, the scanning step density and beam spread can be adjusted to ensure 100% coverage to the effective limit of the reflected laser light or as the user wishes.

As a matter of fact it can be adjusted dynamically if desired to first acquire an object then inspect it more closely with reduced beam spread and tighter step density.

(Effectively zooming in on a specific area of interest as desired.)

Arguably you could do something similar with a camera and zoom lens and variable step gimbal, but the computational overhead in determining any useful depth (Z) information would be very significant.

Certainly the speed of light represents a factor that needs to be taken into account, but there are already multiple LIDAR units that send out a coded pulse so that return pulses can be received out of order and still be fully resolved in time.

And generally you can still achieve very high resolution coverage without resorting to overlapping pulses at more common vision distances, say the 100-to-300-foot range.

And even the chip in this article provides an improved laser system, with 4 independent lasers that are beam-aligned with each other.

Going to a higher resolution camera does make the camera array more sensitive to smaller angular deviations for detecting distance, but it still does not approach the absolute accuracy of the TOF measurement system employed by a LIDAR scanner.

Your statement that LIDAR can miss some objects and then need to capture or recapture them, requiring caching, is true, but it is even more true for a camera, which depends on external (generally ambient) illumination to perceive objects, and that illumination can change drastically and rapidly.

LIDAR provides its own fixed-quantity, highly bandwidth-specific illumination, which is far more predictable and therefore far more immune to this phenomenon than a camera is.

So by your analogy, the camera would actually require much more "caching" than a LIDAR.

In fact, if the beam spread is equivalent to the step angle and the beam power is sufficient to generally ensure detection of the reflected light pulse, caching would be 100% unnecessary.

That is never true for a camera using ambient light.

The point that you are continuously reiterating about LIDAR depth information being no better than stereo camera depth information is simply 100% wrong.

And, of course, there is the other kind of LIDAR, namely the one that uses an X/Y array of avalanche photodiodes, each of which detects in real time the TOF of the reflection of a high-pulse-energy, low-average-energy, broad-beam laser flash.

This is not scanned, but is dependent on the focus and spread of the lens in front of the X/Y array (which could be "zoomed" too, of course).

DJI is proving that cameras can be effectively used for simple object avoidance and I am not disputing that it can be done.

What I am disputing is that it is better than a competent LIDAR setup as the primary basis for competent navigation in a complex (rich) environment.

Note, DJI appears to be developing IR active reflective systems in the INSPIRE 2 as we speak.

I do not know whether they are TOF or simple angular separation.

I believe you are disputing the value of LIDAR based on suppositions that are simply incorrect, possibly even purposely so, in the same manner in which politicians feel entitled to try to convince their constituents of things they actually know not to be true.

As a result, this will be my last post on this blog; it is a game I do not wish to play.

Either you know the truth and are purposely misrepresenting it in order to string me along, or you have a misplaced faith in something for whatever reason.

I am direct; I take things at face value and respond without guile, so I am out of here.

Comment by Hector Garcia de Marina on November 17, 2016 at 12:26am

Lidar has known limit on pulse frequency.

Video camera can be easily upgraded to 100Mpix matrix.

And this is why we will never hear about your "global innovations" Dr. Frankenstein.

 

Comment by Hector Garcia de Marina on November 17, 2016 at 12:43am

@Gary

In case you do not know it: this guy (I still believe he is not an actual person but just an AI :P) always has the same behavior. He scans for keywords in the title and posts, then pastes a link containing those keywords and makes "critical" / negative / non-constructive comments.

It does not matter whether you talk about maths, technology or unicorns. We have all engaged with this troll at least once here :P.

Comment by Global Innovator on November 17, 2016 at 4:29pm

@Gary,

from the original post

"

Dimensions 8 mm x 5 mm
Peak optical output 85 W at 30 A per channel
Wavelength 905 nm
Pulse length < 5 ns
Operating voltage 24 V

The overall lidar system covers 120 degrees in the horizontal plane, with 0.1 degree of resolution, and 20 degrees in the vertical plane, with 0.5 degree of resolution. In the light of day, it should detect cars from at least 200 meters away, and pedestrians at 70 meters out.

The MEMS chip can operate at up to 2 kilohertz

"

Pulse length < 5 ns.

If the laser diode is on for 5 ns and off for 5 ns,

then a single laser diode in this LIDAR module can be pulsed at 1,000,000,000 ns / 10 ns = 100,000,000 pulses per second = 100 MHz

(in theory)

The MEMS chip can operate at up to 2 kilohertz

limiting the above pulsing frequency and the ability to scan 100M points per second.

The overall lidar system covers 120 degrees in the horizontal plane, with 0.1 degree of resolution

So 1,200 points can be scanned in the horizontal plane (covered by 120 degrees)

20 degrees in the vertical plane, with 0.5 degree of resolution.

So 40 points.

So this Lidar has 1,200 x 40 resolution when static.
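The point budget above can be checked with a short script (my own back-of-envelope; the 25 Hz frame rate is a hypothetical of mine, not from the article):

```python
# Point budget from the quoted specs.
h_points = int(round(120 / 0.1))   # 120 deg field at 0.1 deg resolution -> 1200 points
v_points = int(round(20 / 0.5))    # 20 deg field at 0.5 deg resolution -> 40 lines
points_per_frame = h_points * v_points
print(points_per_frame)            # points in one full static frame

frame_rate_hz = 25                 # hypothetical update rate
print(points_per_frame * frame_rate_hz)  # laser pulses per second needed at that rate
```

That works out to 48,000 points per frame, or 1.2 million pulses per second at 25 Hz, far below the theoretical 100 MHz diode limit but well above the 2 kHz MEMS sweep rate, so the mirror, not the diode, is the bottleneck.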

In the light of day, it should detect cars from at least 200 meters away, and pedestrians at 70 meters out.

Angular resolution Wikipedia

https://en.wikipedia.org/wiki/Angular_resolution

Basics of trigonometry

https://www.khanacademy.org/math/trigonometry/trigonometry-right-tr...

Visual trigonometry calculator

http://www.visualtrig.com/

We start with right-triangle calculations, since we can divide the 120 degrees into 2 x 60 degrees.

The adjacent side is our distance:

70 meters in first case ( pedestrian)

and 200 meters in second case (car)

We need to calculate the opposite side and double it to get the horizontal line segment covered by the 120 degrees (in the horizontal plane).

The length of the opposite side for the 60-degree angle is 346.41 meters at a distance of 200 meters.

Doubling this value we get 692.82 meters

So at a distance of 200 meters, our Lidar can scan 1,200 individual points (at native resolution in static mode).

And individual points are separated by 0.57735 meter, so if your car is 5 meters long, 8 or 9 scan points may hit it.

In the case of a pedestrian at a distance of 70 meters:

The length of the opposite side is 121.24 meters; doubled, this gives 242.48 meters.

So at a distance of 70 meters, our Lidar can scan 1,200 individual points within a 242.48-meter-long horizontal line segment,

giving a spatial horizontal resolution of 0.20 meter.

So Osram is right in claiming the ability to detect a person at a distance of 70 meters, since even a thin human can still be 0.40 meter wide, and so can be targeted by 2 or 3 scan points.

OK, I have projected the angular, spherical resolution onto a straight horizontal line.

You are free to redo the calculations as your homework.

As you can see, the resolution of the Lidar in the horizontal plane is like that of a 1 Mpix camera: 1,000 vs. 1,200 points.

At a distance of 200 meters, individual scan points are separated by a distance of about 0.58 meter,

so a pedestrian could be missed at a distance of 70 meters if not facing the Lidar directly.
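The flat-line projection in that calculation can be reproduced in a few lines (my own sketch of the commenter's method, with the same 60-degree half-angle and 1,200-point assumptions):

```python
import math

# The 120-degree fan is split into two 60-degree right triangles and projected
# onto a straight line at the target distance; the 1,200 points are then
# assumed to be spread evenly along that line.
def flat_line_spacing_m(distance_m, half_angle_deg=60.0, points=1200):
    segment = 2 * distance_m * math.tan(math.radians(half_angle_deg))
    return segment / points

print(round(flat_line_spacing_m(200), 3))  # spacing at 200 m (the car case)
print(round(flat_line_spacing_m(70), 3))   # spacing at 70 m (the pedestrian case)
```

This reproduces the 0.577 m and 0.202 m figures. Note, though, that projecting onto a flat line stretches the spacing toward the edges of the field; measured along the scan arc itself, adjacent points are separated by distance times 0.1 degree in radians, about 0.35 m at 200 m.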

Comment by Laser Developer on November 17, 2016 at 10:56pm

Darius, is that you? Or did you just conveniently forget to add LiDAR beam divergence into your calculation?

Comment by Hector Garcia de Marina on November 17, 2016 at 11:49pm

He also forgot about other 'details' such as parallel processing. Basically, he does not know what he is talking about.

https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect

Comment by dionh on November 18, 2016 at 2:06am

Somewhere there is a psychiatric ward missing a patient.


© 2019   Created by Chris Anderson.
