1. I think stitching will be something like this: you get a 3D scene that is relative to the camera coordinate system, but you know the transform between the camera coordinate system and your whole world scene (you control the camera movement with motors, e.g. rotate 30 degrees to the left). Just by applying the camera transformation to your acquired 3D scene, you obtain the current 3D view in your world coordinate system.
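A minimal sketch of that camera-to-world step, assuming a known pose (the 30-degree left rotation about the vertical axis and the camera position are made-up numbers for illustration):

```python
import math

# Assumed camera pose in the world frame: rotated 30 degrees left about
# the vertical (y) axis, sitting at world position (1.0, 0.0, 0.0).
theta = math.radians(30.0)
cam_pos = (1.0, 0.0, 0.0)

def camera_to_world(p):
    """Rotate a camera-frame point about the y (up) axis, then translate."""
    x, y, z = p
    xw = math.cos(theta) * x + math.sin(theta) * z
    zw = -math.sin(theta) * x + math.cos(theta) * z
    return (xw + cam_pos[0], y + cam_pos[1], zw + cam_pos[2])

# A point one metre straight ahead of the camera, expressed in the world frame:
point_world = camera_to_world((0.0, 0.0, 1.0))
print(point_world)
```

Applying `camera_to_world` to every point of the acquired 3D frame drops it straight into the world scene; the same idea scales to full 4x4 pose matrices.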
As for the position of the camera, maybe you can use the DCM algorithm with an IMU to estimate the pose of your cameras.
2. Also, for stitching you may use software fusion (registration): one scene is repeatedly rotated until it matches the other.
Bill,
This is more than optical flow. Optical flow gives you 2D motion between two video camera frames; you usually use it when you have only a single 2D camera.
With a 3D camera like the Kinect you get 3D views, so the software implementation will be different. Each frame is a 3D scene. You move the robot, you get another 3D scene. Since you control the camera movement, you know how to stitch the different 3D views together. You keep your whole 3D scene in memory, and you can do collision detection between the walls and your desired path.
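One simple way to keep that whole 3D scene in memory and check a path against it is a coarse occupancy grid (a sketch under assumed 0.5 m voxels; everything here is illustrative, not from the original post):

```python
# Keep the stitched world scene as a set of occupied voxels,
# then test a planned path against it.
VOXEL = 0.5  # assumed voxel size in metres

def voxel(p):
    """Map a world-frame point to its integer voxel coordinates."""
    return tuple(int(c // VOXEL) for c in p)

world = set()  # occupied voxels accumulated from every stitched 3D frame

def add_frame(points):
    world.update(voxel(p) for p in points)

def path_collides(path):
    return any(voxel(p) in world for p in path)

# A wall of points one metre ahead of the robot, then two candidate paths:
add_frame([(x / 10.0, 0.0, 1.0) for x in range(-10, 11)])
print(path_collides([(0.0, 0.0, 0.2), (0.0, 0.0, 0.6), (0.0, 0.0, 1.0)]))  # True: runs into the wall
print(path_collides([(2.0, 0.0, 0.2), (2.0, 0.0, 0.6)]))                   # False: clear
```

Each new stitched frame just adds voxels to `world`, so the collision test stays cheap no matter how many frames you have accumulated.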
Comments
This has a $5k laser scanner on top; what is the Kinect even used for on this robot? Just gestures? Why not just use its depth information?
http://www.mitre.org/news/events/tech07/3055.pdf
http://robotbox.net/blog/gallamine/open-lidar-project-hack-neato-xv...