4-channel LIDAR laser

Dimensions: 8 mm x 5 mm
Peak optical output: 85 W at 30 A per channel
Wavelength: 905 nm
Pulse length: < 5 ns
Operating voltage: 24 V

The overall lidar system covers 120 degrees in the horizontal plane with 0.1 degree of resolution, and 20 degrees in the vertical plane with 0.5 degree of resolution. In daylight, it should detect cars at a range of at least 200 meters and pedestrians at up to 70 meters.

The MEMS chip can operate at up to 2 kilohertz.
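Taken together, those figures imply a fairly large point budget. A quick back-of-the-envelope sketch (the one-line-per-mirror-cycle assumption and the resulting frame rate are mine, not published specs):

```python
# Point budget implied by the quoted field of view and angular resolution.
h_fov_deg, h_res_deg = 120.0, 0.1
v_fov_deg, v_res_deg = 20.0, 0.5

lines_per_frame = v_fov_deg / v_res_deg                        # 40 vertical steps
points_per_frame = (h_fov_deg / h_res_deg) * lines_per_frame   # 1200 * 40 = 48,000 points

# If the 2 kHz MEMS mirror traces one horizontal line per cycle (an assumption,
# not a published spec), a full frame takes 40 cycles:
frame_rate_hz = 2000.0 / lines_per_frame               # ~50 frames per second
points_per_second = points_per_frame * frame_rate_hz   # ~2.4 million points per second

print(points_per_frame, frame_rate_hz, points_per_second)
```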

The company says test samples will be available in 2017, and that commercial models could arrive in 2018.

With mass production, the price should drop to around 40 Euros (US $43.50).

http://spectrum.ieee.org/cars-that-think/transportation/sensors/osr...

http://www.osram-group.de/en/media/news/press-releases/pr-2016/07-1...


Comment by Patrick Poirier on November 14, 2016 at 3:55pm

Well Mr McCray, you keep on adding interesting topics here :-)

So I will divert from the main subject to get into the MIT Phased Array Lidar. That is a fully solid-state technology indeed, quite similar to phased-array RADAR, but with a significant difference (for the moment): it is one-dimensional, scanning only horizontally with no vertical sweep.

On the other hand, MEMS offers a 120 x 20 degree directional scan within a plane (actually it is 4 lasers arranged as 4 x (30 x 20)). And MEMS is already a proven, reliable technology embedded in most of our existing IMUs; it is quite efficient and can be thermally compensated at low cost.

Just like gyros, LIDARs are shifting from motors to MEMS and ultimately to solid state. But getting a 2D Phased Array Lidar will take some time, and the price will probably be higher than MEMS for equivalent specs. This assumption is based on the strong commitment from the manufacturer that just acquired the technology: "We intend to make lidar an affordable feature for every new-built car worldwide", says Peter Schiefer, President of the Automotive division at Infineon Technologies AG.

Always a pleasure to have these discussions with you, Gary :-)

Comment by Global Innovator on November 14, 2016 at 4:14pm

OK, two competitors are better than none.

Could you tell me why 3D video depth mapping in daylight (at 5-20 Mpix resolution) has stalled?

You can get a 5-20 Mpix camera for $10, and a full stereo vision system for $100 (a twin-camera 3D smartphone).

30 fps or 60 fps is provided, so depth maps can be generated on the fly, providing more live depth detail than any LIDAR system offered on the market.
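As a rough sketch of what that on-the-fly disparity computation looks like in practice (OpenCV used purely as an illustration; the file names, baseline, and focal length are assumptions, not figures from this thread):

```python
import cv2
import numpy as np

# Minimal stereo depth-map sketch. The rectified image files, focal length and
# baseline below are hypothetical placeholders, not values from any real system.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # SGBM output is fixed-point * 16

focal_px = 700.0    # assumed focal length, in pixels
baseline_m = 0.12   # assumed stereo baseline, in metres

valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]   # z = f * B / d
```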

OK, at night, 3D vision and depth-mapping technology fail to work properly.

But there is nothing appealing about being blinded by thousands of car LIDAR lasers if you live close to a high-traffic highway in the next few years.

Just another carcinogen coming soon.

Comment by Gary McCray on November 14, 2016 at 5:52pm

Hi Patrick, basically I couldn't agree more.

The MIT Lidar chip is looking at an all-up cost of $10.00 for a one-direction scan; of course you can add a servo for 5 bucks and have a three-dimensional scan.

That said, in its current incarnation and at its current power level it is pretty near-sighted.

But it is truly tiny, low power, and suitable for a lot of applications.

On the other hand, the OSRAM chip is slated to be ~$43.00 in manufacturing quantities and will require a MEMS unit and a TOF light sensor, each of which will probably cost more than the chip, to say nothing of the CPU power necessary just to run it all. So realistically it is at least a $200.00 solution.

That said, it is definitely the solution I would want, and $200.00 is peanuts for what it can do: operate in the real world, in real lighting, at normal people and vehicle speeds.

As it stands, the MIT chip with a servo would make a good vacuum cleaner or a not-very-fast robot pet, but the OSRAM-based system can be the basis for almost anything.

And Global, the problem with using cameras for this is simple: CPU POWER. The output of a LIDAR is a 3D point cloud, by far the most useful form of information for extracting data about the environment and objects around you.

A camera gives you light and dark and maybe color; you have to interpret everything to be able to make sense of it in a structural 3D mode.

Much of our brain is devoted to doing exactly that.

And although stereo cameras can provide a basis for low-resolution 3D position information, it is of very low quality and the necessary edges are often hard to line up.

So, in summary: LIDAR gives you exactly what you wanted to know; a camera takes a lot of processing power to get at anything of value.

Of course they can be used together to enhance discrimination and even to lower CPU requirements for equivalent information.

Computers see differently than people; here we have an opportunity to adjust the process so it is optimized for the computer.

The key to not presenting an eye hazard is very simple: high pulse power, low average power. It is actually easy to keep the average power so low that it presents no hazard at all, in fact well below normal ambient light.
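A quick illustration of that high-peak, low-average trade-off, using the spec-sheet figures at the top of this page and an assumed (hypothetical) repetition rate:

```python
# Back-of-the-envelope average power for the pulsed laser described above.
# Peak power and pulse width come from the OSRAM headline specs; the pulse
# repetition rate is an assumption made only to illustrate the duty cycle.
peak_power_w = 85.0        # W, peak optical output per channel
pulse_width_s = 5e-9       # s, upper bound on pulse length
rep_rate_hz = 100_000      # Hz, assumed repetition rate (hypothetical)

duty_cycle = pulse_width_s * rep_rate_hz    # 5e-4
avg_power_w = peak_power_w * duty_cycle     # ~0.0425 W, i.e. ~42.5 mW average

print(f"duty cycle {duty_cycle:.1e}, average power {avg_power_w * 1000:.1f} mW")
```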

I used to work in pulsed power, and this is one of the really magical aspects of it.

Also, these lasers generally operate in the infrared; our eyeballs are made to focus visible light, so IR is scattered and out of focus on the retina (not brought to a point), making it even less of a threat.

Best Regards,

Gary

Comment by Patrick Poirier on November 14, 2016 at 5:58pm

LOL!! Global Innovator, just like the song... "she blinded me with science"...

3D has stalled simply because it's not trendy, but there are excellent products already available on the market. You probably know the ZED stereo camera; here Randy made a pretty nice demonstration of this device with a TX1: https://www.youtube.com/watch?v=5AVb2hA2EUs

For me, the current trends are:

- ADAS: Self-driving cars require a lot of sensors, and a lot of R&D is being done worldwide to get the next wave of sensors, as this particular topic confirms.

- AI: Neural networks are making amazing progress thanks to GPUs and a new generation of ASICs like Movidius. This is just the beginning; it is still missing tools, techniques, and most importantly a semantics to describe and share knowledge between systems, what Gary is calling ''awareness''. This is mostly JPEG or monocular video for the moment, but we can expect stereoscopic and multiscopic AI systems one day.

- Augmented and virtual reality: This is the realm of large-scale SLAM, either real (Google Maps), virtual (3D games), or mixed (Pokemon GO). Not really my bag.

So with all these technological advances, a fully autonomous, self-guided UAV that can complete a mission within a cluttered and ever-changing environment is plausible in the foreseeable future. I have to admit that the DJI MAVIC seems to have already stepped into this future.

Comment by Global Innovator on November 14, 2016 at 7:00pm

Better video comes directly from

https://www.stereolabs.com/zed/specs/

I am now testing the Aiptek 3D twin-camera affordable solution with HDMI output.

In the meantime, I have purchased 20 nano drones to implement tethering by Q4.

BTW, depth-map generation is faster than LIDAR's point cloud, since a single-pass filter processes the left-eye and right-eye images at the image capture frequency (30 fps or 60 fps).

It makes no difference whether 3D or stereoscopic vision is generated from a single-lens camera or by a twin-camera system.

(Just close one of your eyes and stay still, then move to generate 3D vision by kinetics: single-eye 3D vision. Closer objects move faster, so your brain can cache a series of single-eye images, detecting faster-moving objects within your FOV.)

So a single-pass filter adapted to build depth maps works fine, and the depth map is updated on the fly.
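As a minimal sketch of that kinetic (single-eye) idea, treating the camera's own sideways motion between two frames as a stereo baseline (all values below are assumptions chosen only to show the arithmetic):

```python
# Sketch of "kinetic" single-camera depth: for a purely sideways camera
# translation over a static scene, the pixel shift of a feature between two
# frames plays the same role as stereo disparity, so z = f * B / d applies.
focal_px = 700.0        # assumed focal length, in pixels
translation_m = 0.05    # assumed sideways camera motion between frames, metres

pixel_shift_px = 12.0   # hypothetical measured feature displacement between frames
depth_m = focal_px * translation_m / pixel_shift_px   # ~2.9 m

print(f"estimated depth: {depth_m:.2f} m")
```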

Forget AI, NN, AR, VR and similar blah-blah marketing.

What is hot is SelfEgo implemented in autonomous cars, drones, and boats (real intelligence actually implemented, as opposed to the blah-blah AI and NN marketed for the last 40 years as working soon, very soon).

Comment by Global Innovator on November 14, 2016 at 7:03pm

Follow-up:

Your video is all about VTOL tests.

I am aware of VTOL limitations and don't expect a VTOL Airbus or Boeing in the near future (due to material stresses).

Comment by Gary McCray on November 14, 2016 at 7:19pm

Hi Global,

You are certainly right that you can generate 3D perspective with a single camera and kinetics, using multiple images shot at different times; it is just stereo by another method.

But that isn't the point; the point is getting the maximum quantity and quality of useful information for the least amount of processing.

A 3D point cloud is optimal for a computer-generated perception of the outside environment.

Basically every other method simply requires more (often considerably more) processing to extract the data already found in a 3D point cloud.

Lidar generally produces a reliable, high-accuracy point cloud as its output.

Other active systems, like the Invensense (original Kinect), can also produce a useful 3D point cloud, but passive systems like cameras require a very serious processing front end to extract the same data.

Sure other kinds of data can be useful, but the 3D point cloud is the most generally worthwhile and easiest to utilize.

Lidar-based systems produce very accurate distance data; stereo camera systems, whether by single or multiple means, do not.

Comment by Global Innovator on November 14, 2016 at 7:46pm

"

3D point cloud is optimal for a computer generated perception of the outside environment.

Basically every other method simply requires more (often considerably more) processing to extract the data already found in a 3D point cloud.

Lidar generally produces a reliable high accuracy point cloud as it's output.

"

It doesn't matter if you save the 3D point cloud as a 3D array or as a gray-scale flat picture, as promoted by

https://www.stereolabs.com/zed/specs/

Either a depth map, visualized as a gray-scale image, or a point cloud generated by LIDAR is live-updated at the scan frequency.

A video camera at 30 fps or 60 fps is really fast, and depth maps are generated on the fly from multiple images (shot with a single-lens camera).

So a depth-map image can represent either a depth map or a point cloud, and basic filters work fine to detect near objects, extracting topological objects in real time.

And what's more:

A point cloud has to be turned into a depth map for obstacle detection in order to implement avoidance.
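A minimal sketch of that conversion, projecting a point cloud into a depth image through an assumed pinhole camera model (the intrinsics, image size, and synthetic cloud are placeholders for illustration):

```python
import numpy as np

# Sketch: project a point cloud into a depth image with a pinhole camera model.
fx = fy = 500.0
cx, cy = 320.0, 240.0
width, height = 640, 480

rng = np.random.default_rng(0)
points = rng.uniform([-5, -2, 1], [5, 2, 20], size=(1000, 3))  # hypothetical (x, y, z), z forward

depth = np.full((height, width), np.inf)
for x, y, z in points:
    u = int(fx * x / z + cx)
    v = int(fy * y / z + cy)
    if 0 <= u < width and 0 <= v < height:
        depth[v, u] = min(depth[v, u], z)   # keep the nearest return per pixel
```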

If you still prefer a point cloud dataset, then a single-lens camera works exactly like a 3D scanner, generating a point cloud dataset at a resolution inherited from the native resolution of the camera sensor.

BTW, I am not sure whether a traffic control officer exposed to laser radiation from hundreds of LIDAR-equipped cars should be equipped with laser-protective glasses only; what about a laser radiation dosimeter?

Comment by Gary McCray on November 14, 2016 at 9:01pm

Depth information from a video-generated depth map is nowhere near as accurate or granular as Lidar-generated depth data, and its resolution greatly degrades with increasing distance, whereas with Lidar it remains constant.

The resolution based on camera pixel resolution is only valid for x-y data, not Z (depth).

Multiple (more than 2) images can refine the depth estimate, but it still never approaches the accuracy of LIDAR.
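A small numeric illustration of that falloff, using the usual stereo triangulation relation and assumed camera parameters (none of these numbers come from the thread):

```python
# Why stereo depth error grows with range while LIDAR error stays roughly flat:
# with z = f * B / d, a fixed disparity error of delta_d pixels gives
# delta_z ~= z**2 * delta_d / (f * B). All parameters below are assumptions.
focal_px = 700.0          # assumed focal length, pixels
baseline_m = 0.12         # assumed stereo baseline, metres
disparity_err_px = 0.25   # assumed sub-pixel matching accuracy

lidar_err_m = 0.03        # assumed (roughly constant) LIDAR ranging error

for z in (2, 10, 20, 50, 100):
    stereo_err = z**2 * disparity_err_px / (focal_px * baseline_m)
    print(f"{z:>4} m: stereo ~±{stereo_err:.2f} m, lidar ~±{lidar_err_m:.2f} m")
```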

And with the new short-pulse LIDAR there is simply NO eye safety hazard.

For object avoidance, only a depth alarm is needed, which can be simple proximity or change of proximity, both of which can be extracted easily from a point cloud without a full depth map.
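For example, a bare-bones version of such a depth alarm working directly on the raw point cloud (the function name, cone width, and range threshold are arbitrary placeholders):

```python
import numpy as np

# Minimal "depth alarm" sketch: flag when any point inside a forward-facing cone
# comes closer than a threshold.
def proximity_alarm(points, max_range_m=2.0, half_angle_deg=15.0):
    """points: (N, 3) array of (x, y, z) returns, with z pointing forward."""
    ahead = points[points[:, 2] > 0]
    dist = np.linalg.norm(ahead, axis=1)
    # Angle of each point off the forward axis
    angle_deg = np.degrees(np.arccos(np.clip(ahead[:, 2] / np.maximum(dist, 1e-9), -1.0, 1.0)))
    in_cone = angle_deg < half_angle_deg
    return bool(np.any(dist[in_cone] < max_range_m))
```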

Avoiding a single object, approached or approaching on its own, is easy; navigating through a complex factory or a forest is not.

What they are accomplishing with cameras only is certainly impressive, but the future will, in my honest estimation, belong primarily to LIDAR, with cameras providing a supplementary assist.

Best,

Comment by Laser Developer on November 14, 2016 at 11:54pm

OK, I suppose I'd better add something here.

Osram is a company that specializes in light emitting components and they wouldn't be investing in this laser technology if they didn't think that it was safe and effective. The new component will be a useful addition to any LiDAR system but keep in mind that it is only a component and not a total system.

The eye hazard for these pulsed lasers is incredibly low since biological systems respond to the energy density, not the peak power. In other words, it's not the number of watts that counts but the number of watts per square meter. Since the beam is relatively wide and the number of joules is very low (less than a millijoule per pulse) the detrimental effect on skin and eyes is minuscule. For a Class 1 laser system there is no detectable effect even with continuous exposure.
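The per-pulse energy can be checked directly from the figures at the top of this page (the beam cross-section below is an assumption used only for illustration):

```python
# Sanity check on the "less than a millijoule per pulse" figure, using the
# spec-sheet numbers quoted above (85 W peak, <5 ns pulse).
peak_power_w = 85.0
pulse_width_s = 5e-9
energy_per_pulse_j = peak_power_w * pulse_width_s      # ~4.25e-7 J, i.e. ~0.43 microjoule

# Energy density over an assumed (hypothetical) 1 cm^2 beam cross-section:
beam_area_m2 = 1e-4
fluence_j_per_m2 = energy_per_pulse_j / beam_area_m2   # ~4.3e-3 J/m^2 per pulse

print(f"{energy_per_pulse_j:.2e} J per pulse, {fluence_j_per_m2:.2e} J/m^2")
```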

As for the relative merits of stereo vision versus LiDAR, in the automotive world one of the problems is that vehicles follow along (almost) the same paths. This means that "kinetic 3D" is hard to apply in many conditions, such as looking straight ahead on a highway. Additionally, the scale of the environment becomes an issue when traveling at speed. Direct stereo resolution drops off rapidly with distance, so within a few meters it is very accurate, but this accuracy falls quickly out to about 20 m, after which there is almost no reliable stereoscopic effect. In contrast, LiDAR is range agnostic, with consistent accuracy out to the distance at which the signal is lost. For the automotive market the requirement is to get reliable operation beyond 50 m, which is the stopping distance of a car traveling at 120 kph.

I personally don't think that processing power is a limitation for either stereo or LiDAR technology. In practice, a number of different sensor technologies will be combined, including laser, camera, radar and ultrasonic. This is because each has advantages and disadvantages in different conditions. The experiences of Tesla suggest that camera technology alone is not sufficiently safe.
