UAV DevBoard Camera Targeting
Posted by Pete Hollands on April 22, 2010 at 6:30pm

This is my first flight with a Sony Webbie mounted on my Twinstar 2, attempting to target a known GPS location on the ground using the gyro information from the UAV DevBoard.

[Video: Camera Targeting with UAV DevBoard]

Tags: matrixpilot, uavdevboard, udb
I calculate the angle to target using pitch and roll, thereby avoiding servo twist. I have wireless comms to the aircraft (an XBee-type link) and can change the target on the fly.
I do the following: polar to target > cartesian > rotation of the cartesian vector by the aircraft's pitch, roll and yaw (compass for yaw, which also gives a true bearing and no yaw drift) > translate back to pitch and yaw for the camera.
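The chain above can be sketched roughly as follows. This is my own simplified illustration, not the poster's actual code: the NED frame, the ZYX rotation order, and all function names are assumptions.

```python
import math

def rot_z(v, yaw):
    # Express an earth-frame vector in a frame rotated by +yaw about z.
    c, s = math.cos(yaw), math.sin(yaw)
    x, y, z = v
    return (c * x + s * y, -s * x + c * y, z)

def rot_y(v, pitch):
    # Same idea for a +pitch rotation about y.
    c, s = math.cos(pitch), math.sin(pitch)
    x, y, z = v
    return (c * x - s * z, y, s * x + c * z)

def rot_x(v, roll):
    # Same idea for a +roll rotation about x.
    c, s = math.cos(roll), math.sin(roll)
    x, y, z = v
    return (x, c * y + s * z, -s * y + c * z)

def camera_angles(rel_ned, yaw, pitch, roll):
    """rel_ned: (north, east, down) metres from aircraft to target.
    yaw/pitch/roll: aircraft attitude in radians (compass yaw, so no drift).
    Returns (pan, tilt) of the camera in the body frame, radians."""
    # Earth -> body: undo yaw, then pitch, then roll (ZYX order).
    v = rot_z(rel_ned, yaw)
    v = rot_y(v, pitch)
    v = rot_x(v, roll)
    x, y, z = v
    pan = math.atan2(y, x)                    # camera yaw relative to the nose
    tilt = math.atan2(-z, math.hypot(x, y))   # positive = up
    return pan, tilt
```

For example, a level aircraft pointing north with the target 100 m ahead gives pan = tilt = 0; pitch the nose up 45 degrees and the camera must tilt down 45 degrees to hold the same target.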
The hardware is an ArduIMU V2 with magnetometer.
The pitch and roll gyros are corrected for drift using the gravity vector obtained from the accelerometers (which in turn are corrected for forward acceleration and centripetal force). The yaw gyros are usually corrected for drift using the GPS velocity vector. However, nowadays we also automatically calculate the wind, so we then obtain the true heading of the plane from the GPS velocity vector corrected for wind. We can optionally fit a magnetometer, which can then also be used to correct the yaw gyro drift. This has the advantage that the yaw gyro is then correct before take-off, and so autonomous take-offs are possible. Best wishes, Pete (off to do some flying).
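A heavily simplified, one-axis illustration of both corrections described above. The real DevBoard/MatrixPilot firmware uses a full DCM algorithm with PI feedback, not this; the complementary-filter form, the `alpha` value, and the function names here are all my hypothetical stand-ins.

```python
import math

def complementary_pitch(prev_pitch, gyro_rate, ax, az, dt, alpha=0.98):
    """One-axis sketch of gyro drift correction: blend the integrated
    gyro rate with the pitch angle implied by the accelerometer's
    gravity vector. alpha weights the gyro (fast but drifts); the
    remaining (1 - alpha) slowly pulls the estimate toward the
    accelerometer reference, cancelling the drift."""
    gyro_pitch = prev_pitch + gyro_rate * dt   # fast but accumulates bias
    accel_pitch = math.atan2(ax, az)           # noisy but drift-free
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch

def heading_from_gps(vn, ve, wind_n, wind_e):
    """True heading of the nose from the GPS ground-velocity vector
    with the estimated wind removed (air velocity = ground velocity
    minus wind), used as the drift-free yaw reference."""
    return math.atan2(ve - wind_e, vn - wind_n)
```

Even with a constant gyro bias, the pitch estimate settles near the accelerometer's answer instead of drifting away; and with a 10 m/s easterly wind removed, a north-east ground track resolves to a due-north heading.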
Just clocked this thread. Superb work.
Playing with a similar idea myself.
Q: What trigger/data feed are you using?
Yaw compensation: is this corrected from the GPS?
Lastly, would you care to share your code? Perhaps I could try to port it to AP.
So can we start to do image analysis and build a 3D view of the world, both for improved mapping and potentially for improved navigation? Maybe we would need software stabilization as the first module in that process, before passing the pictures on for further analysis. This means each frame would have more accurate orientation information when it is passed on to the 3D feature-extraction process.