3D Robotics

Some limitations of the Kinect for SLAM

I Love Robotics has posted an excellent analysis of how the Kinect compares to other SLAM solutions, such as the Neato LIDAR and Hokuyo lasers.

 

Here's an excerpt, but do read the whole thing. It's fascinating:

 

"The depth image on the Kinect has a field of view of 57.8°, whereas the Hokuyo lasers have between 240° and 270°, and the Neato XV-11's LIDAR has a full 360° view.

In addition to being able to view more features, the wider field of view also allows the robot to efficiently build a map without holes. A robot with a narrower field of view will constantly need to maneuver to fill in the missing pieces to build a complete map.

One solution would be to add more Kinects to increase the field of view. The main problem is the sheer volume of data: 640 x 480 x 30fps x (3 bytes of color + 2 bytes of depth) puts us close to the maximum speed of the USB bus, to the point where you are only going to get good performance with one Kinect per bus. My laptop has two USB buses with accessible ports, and you might get four separate buses on a desktop. Even assuming you downsample until it works computationally, you still have to deal with power requirements (unless your robot is powered by a reactor) and possible interference from reflections.

Another, simpler approach is to mount the Kinect on a servo and pan it horizontally; however, this reduces the robot to intermittent motion, constantly stopping and scanning. Depending on your robot's mechanical design, you may be better off rotating the robot."
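
To put the excerpt's bandwidth point in concrete terms, here is a rough back-of-the-envelope check in Python, using the excerpt's 3 + 2 bytes per pixel (the device actually packs data more tightly on the wire, so treat this as an upper-bound sketch rather than a measurement):

# Rough Kinect data-rate estimate, using the figures quoted in the excerpt.
WIDTH, HEIGHT, FPS = 640, 480, 30
BYTES_PER_PIXEL = 3 + 2                  # 3 bytes of color + 2 bytes of depth

rate = WIDTH * HEIGHT * FPS * BYTES_PER_PIXEL      # bytes per second
usb2_peak = 480e6 / 8                              # USB 2.0 high speed: 60 MB/s theoretical

print(f"One Kinect: {rate / 1e6:.1f} MB/s")                  # ~46.1 MB/s
print(f"Share of USB 2.0 peak: {rate / usb2_peak:.0%}")      # ~77%, before protocol overhead

With protocol overhead, real USB 2.0 throughput falls well below the 60 MB/s peak, which is why one Kinect per bus is the practical limit the excerpt describes.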


Comments

  • A few additional thoughts:

    Structured light (the Kinect) is not very happy in direct sunlight.

    A laser scanner on a servo pitch platform is easily used at a dynamically selectable vertical resolution to produce a 3D image (a minimal sketch of that projection follows this comment).

    That said, the Kinect is also easy to put on a servo-controlled yaw platform to widen the view or select a specific horizontal view.
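
The "laser scanner on a servo pitch platform" idea boils down to projecting each 2D scan through the known tilt angle. Here is a minimal sketch of that projection in Python, assuming the scanner pivots about its own origin (no lever-arm offset) and that the servo tilt is a pure rotation about the scanner's y-axis:

import math

def scan_to_points(ranges, angle_min, angle_increment, pitch):
    """Project one 2D laser scan, taken with the scanner tilted by `pitch`
    radians, into 3D points in the robot frame. Invalid range returns
    should be filtered out before calling this."""
    points = []
    for i, r in enumerate(ranges):
        yaw = angle_min + i * angle_increment
        # Point in the scanner's own scan plane.
        x, y = r * math.cos(yaw), r * math.sin(yaw)
        # Tilt that plane about the y-axis by the servo pitch angle.
        points.append((x * math.cos(pitch), y, -x * math.sin(pitch)))
    return points

Sweeping the servo through a range of pitch angles and accumulating the projected points gives the 3D image the comment describes; the vertical resolution is simply the pitch step size you choose.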

  • Theoretically, you should be able to spin the Kinect around with a brushed motor and an encoder as fast as you need, depending on the refresh rate required, and build a 3D map by combining the processed images according to the position each was taken at (via the encoder). The catch is that you can't exceed the frame rate of the camera, but there is a workaround: since the camera has a field of view of 57.8 degrees, you only need 7 frames per second to get a 360-degree view at 1 Hz. Therefore, without exceeding the maximum 30 fps, you can achieve about 4.28 full 360-degree sweeps per second (a quick check of this arithmetic is sketched after this comment). Comparing this to the Neato, which has only one line of resolution, I think the choice is obvious: a $400 piece of crap with a laser diode, or the $150 Kinect. You choose.

     

    Basically there are no limitations; instead, HUGE BENEFITS can be reaped from this fine device!

     

    PS: The reason I didn't suggest using software to stitch the images together, like panorama software, is speed. We want to achieve the fastest sweep rate possible and then poll the device when we need information, to keep the processing requirements to a minimum, even though accuracy is decreased; with the software panorama method we could instead achieve, theoretically, a maximum resolution of not 320x240 but that times pi = 241,275 (ha ha, I wish the Kinect had more range, don't you?). Are you guys with me on this? But if I already had a fancy vacuum cleaner and had destroyed it beyond repair, I would just further reverse engineer the hardware, increase the RPM and thus the refresh rate, and use servos to find the latitude. It's latitude, right? I crack myself up!

    PS: I think it uses a pot, too!

     

    NOTE: If you did your research (or Googled it), you'd know the Kinect gives you a 640×480 color video stream and a 320×240 depth stream.
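
For what it's worth, the sweep-rate arithmetic in the comment above checks out. Here is a quick sketch in Python, using the 57.8° field of view and 30 fps figures from the excerpt:

import math

FOV_DEG = 57.8     # Kinect depth field of view (horizontal), per the excerpt
MAX_FPS = 30       # Kinect depth frame rate

frames_per_rev = math.ceil(360 / FOV_DEG)     # 7 frames cover a full circle
max_sweep_hz = MAX_FPS / frames_per_rev       # ~4.3 full sweeps per second

print(f"Frames per 360-degree sweep: {frames_per_rev}")
print(f"Max sweep rate at {MAX_FPS} fps: {max_sweep_hz:.2f} Hz")

In practice the usable rate would be lower, since frames captured while the sensor is moving will be smeared unless each capture is stopped for, or time-stamped and corrected.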

  • PS: You could do the same with the Kinect: rotate it around to sweep 360 degrees. There is no real reason that a quadcopter must stay in the same yaw orientation. :)

  • Why not just rotate the Neato 90 degrees to scan a vertical line, and then rotate the 'copter to sweep out a true omnidirectional (4pi steradian) view?

  • But the Kinect also gives 2D depth-map information: 57 degrees on the X axis and maybe 60 degrees on the Y axis, whereas the Neato scans a single line on the X axis. You need to move it up and down to get 3D perception. In a dynamic environment, the Kinect is better suited.
