(Scrap this: after looking at the SLAM page, it looks like they used the Hokuyo scanner. It generates the same sort of image as the images in the videos. If you have a spare 350 grams available and $5,500, then indoor lidar is for you. That said, I wonder how feasible a pure camera-based point cloud generator would be.)
I am listening to the podcasts and just came across the interview with Bill. I was interested in the MIT GPS-denied indoor drone and the guys' comments on the LIDAR. They were impressed by the miniaturization needed to make a LIDAR work on a quadcopter.
If I got the comments in the podcast right, Bill said that MIT were processing the lidar data on the ground and sending the results back up to the drone so it could calculate paths and routes.
I had a look at the video, and I think the MIT guys were using two static (non-rotating) line lasers of 120 degrees each, viewing them with a camera, and processing the camera view to get distance information from the triangulation geometry of where the laser line falls in the image. They appear to be using three cameras in a line above the lasers, so maybe their system is based on stereographic analysis as well (although if the lasers are in line with the cameras, I am not sure how that would work; normally an offset is required). A rough sketch of the triangulation idea is below.
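For anyone curious how the trig works, here is a minimal Python sketch. It assumes a laser mounted a known vertical baseline below the camera, fanning out a plane parallel to the optical axis; the focal length, principal point, and baseline values are illustrative guesses, not MIT's actual rig.

    def range_from_laser_row(v, v0=240.0, f_px=600.0, baseline_m=0.10):
        # A surface at range z images the laser line at pixel row
        # v = v0 + f_px * baseline_m / z, so invert for z.
        disparity = v - v0
        if disparity <= 0:
            return float("inf")   # line at/above the horizon: out of range
        return f_px * baseline_m / disparity

    def column_to_point(u, v, u0=320.0, v0=240.0, f_px=600.0, baseline_m=0.10):
        # Turn one detected laser pixel (u, v) into an (x, z) scan point:
        # z comes from the row disparity, x from the column bearing.
        z = range_from_laser_row(v, v0, f_px, baseline_m)
        x = z * (u - u0) / f_px
        return x, z

Running column_to_point across every column of the detected laser line gives a planar scan, much like one sweep of a rotating lidar.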
What was also interesting was the similarity between the point cloud generation video from the MIT guys and the Carnegie Mellon videos for the Grand Challenge. Also, vibrations from the quad give a small vertical range to the scan, so they don't have to use a mirror, and the tilting of the quad sweeps the scan up and down walls too; a small sketch of folding the attitude into the scans follows.
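To illustrate how tilt turns a flat scan into a 3D cloud, here is a hedged sketch: rotate each body-frame scan by the quad's roll/pitch/yaw and accumulate the results. Frame conventions are assumptions (they vary by autopilot), and synced_scans is a hypothetical list of attitude-stamped scans.

    import numpy as np

    def scan_to_world(points_body, roll, pitch, yaw, position):
        # Rotate a planar scan (N x 3 array, body frame) into the world
        # frame using the quad's attitude, then translate by its position.
        cr, sr = np.cos(roll), np.sin(roll)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(yaw), np.sin(yaw)
        Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch
        Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw
        R = Rz @ Ry @ Rx
        return np.asarray(points_body) @ R.T + np.asarray(position)

    # As the quad vibrates and tilts, successive scans land at slightly
    # different heights on the walls, and stacking them builds the cloud:
    # cloud = np.vstack([scan_to_world(s, r, p, y, pos)
    #                    for (s, r, p, y, pos) in synced_scans])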
http://groups.csail.mit.edu/rrg/videos.html
http://openslam.org/gmapping.html
Any thoughts?
Cheers
Diarmuid
Replies
http://rp181.fortscribe.com/?cat=108