WHAT: I am trying to work out what it would take and whether it is feasible to put a sensor like the Kinect or PrimeSense variant onto an Arducopter.
WHY: I need a low-cost quadrocopter AI research platform for machine learning, semantic environments, and vision.
I know similar topics have been posted @ DIYDrones many times, and I appreciate that doing anything like SLAM + ROS would require extra horsepower/an extra board.
What I haven't managed to find is a solution that addresses how to mechanically mount such a sensor onto the Arducopter, along with issues like these:
I know there are plans to miniaturize the Kinect sensor for use in phones, but when? Alternatively, the Pixhawk platform is designed with vision in mind and comes with the necessary sensors, although I am not sure whether it is available as a kit to purchase (or how easy it would be to assemble myself).
I am a software dev and do not have much mechanical engineering experience, so the above issues are a bit of a challenge for me.
Penny for your thoughts.
I don't think it should be too difficult to get your project flying. Flight stability is no problem, and as long as you keep the weight down it shouldn't be hard.
Here's how I would do your project... I'd use a Gumstix computer to run the cameras. Check out the data sheet on their camera modules: they support stereo vision. I don't know how much of this is implemented in software, but the hardware support looks pretty slick. Then I'd use a Paparazzi autopilot (LISA), which connects directly to the Gumstix Overo board. Otherwise you could use the ArduPilot, but I'm not sure about integrating it with the Gumstix.
Using the Kinect might limit you too much. You probably know that the spacing between the cameras determines your depth resolution and effective distance. So if you want to fly at a safe altitude and/or see very much, you'll probably need to mount your cameras further apart than the Kinect, which is made for short-range applications where objects are near the cameras.
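To put numbers on that baseline trade-off, the usual rectified-stereo relations are Z = f·B/d (depth from disparity) and ΔZ ≈ Z²·Δd/(f·B) (depth error for a Δd-pixel disparity error). A minimal sketch in Python; the focal length and baselines below are illustrative assumptions, not real Kinect specs:

```python
# Stereo depth trade-off for a rectified pair:
#   Z  = f * B / d              (depth from disparity)
#   dZ ~ Z^2 * dd / (f * B)     (depth error grows quadratically with range)
# All parameter values here are made-up assumptions for illustration.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth in meters from disparity in pixels."""
    return f_px * baseline_m / disparity_px

def depth_error(f_px, baseline_m, z_m, disparity_err_px=1.0):
    """Approximate depth uncertainty in meters at range z_m."""
    return (z_m ** 2) * disparity_err_px / (f_px * baseline_m)

f_px = 580.0                      # assumed focal length in pixels
for baseline in (0.075, 0.30):    # narrow rig vs. a 4x wider custom rig
    err = depth_error(f_px, baseline, z_m=10.0)
    print(f"baseline {baseline} m -> ~{err:.2f} m error at 10 m range")
```

Quadrupling the baseline cuts the range error by 4x at the same altitude, which is exactly why a wider custom rig beats a short-baseline sensor once you fly higher.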
IIRC there are some open source machine vision projects that might be a good place to start. Having a tiny Linux computer with enough power to process two video streams (or an interleaved stereo stream) is going to be key. The Gumstix should be exactly what you're looking for. It has been said that they're planning a daughter board similar to the RoboVero specifically designed for UAVs. It would essentially be another Paparazzi board that interfaces to the Gumstix, but mass produced by them, so it would be cheap. The RoboVero is also already cheap ($99) and has an onboard IMU and servo outputs. Paparazzi could be easily ported to it, as they already have LPC-processor-based code.
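For a feel of the per-pixel work the onboard computer has to do on those two streams, here is a toy block matcher over a single scanline. This is purely illustrative pure Python (a real pipeline would use an optimized library such as OpenCV, not per-pixel Python loops):

```python
# Toy 1-D stereo matcher: for each pixel of the left scanline, find the
# horizontal shift (disparity) that best matches the right scanline, using
# sum-of-absolute-differences over a small window. Convention: left pixel i
# corresponds to right pixel i - d. Illustrative only, not flight code.

def sad(left, right, i, j, half=2):
    """Sum of absolute differences between windows centered at i and j."""
    return sum(abs(left[i + k] - right[j + k]) for k in range(-half, half + 1))

def match_scanline(left, right, max_disp=8, half=2):
    """Return a disparity estimate for each interior pixel of the left line."""
    disps = []
    for i in range(half, len(left) - half):
        best_d, best_cost = 0, float("inf")
        # keep the right-side window in bounds: i - d - half >= 0
        for d in range(0, min(max_disp, i - half) + 1):
            cost = sad(left, right, i, i - d, half)
            if cost < best_cost:
                best_d, best_cost = d, cost
        disps.append(best_d)
    return disps
```

Even this tiny example is O(width x max_disp x window) per scanline, which is why the answer above stresses having enough onboard horsepower for stereo.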
The Pixhawk hardware is platform agnostic. ETH was using off-the-shelf quads for its SLAM projects and installing its own electronics, and those off-the-shelf quads still cost many thousands of dollars.
ETH had a group buy on Pixhawk hardware a year ago. It took almost 6 months to get it and they haven't had one since. They take a lot of knowledge to get running in my experience.
You can mount a Kinect to a quadcopter with a simple piece of bent aluminum; the center of balance is very accommodating on quads. I dislike the Kinect because it relies on faint projected infrared dots that wash out in bright light. It doesn't work outside in the light of day.
Were you able to achieve what you mentioned in the question? Did things work out? I am planning to work on a similar project, and any input would be great.