This is my first flight with a Sony Webbie mounted on my Twinstar 2, attempting to target a known GPS location on the ground using the gyro information from the UAV DevBoard.
I have been working on the same project as you but have taken a different route.
I calculate the angle to target using pitch and roll, thereby avoiding servo twist. I have wireless comms to the aircraft (an XBee-type link) and can change the target on the fly.
I do the following: polar to target > Cartesian > rotation of the Cartesian vector by the aircraft's pitch, roll, and yaw (compass for yaw, which also means we get a true bearing from the start and no yaw drift) > translate to pitch and yaw.
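The pipeline above can be sketched in a few lines. This is a hedged, floating-point illustration of the geometry only, not the actual onboard code; all function names are hypothetical, and it assumes an NED earth frame (north, east, down) and Z-Y-X Euler angles in radians:

```python
import math

def polar_to_ned(bearing_deg, distance_m, height_diff_m):
    """Bearing/range/relative height of the target from the aircraft
    -> NED (north, east, down) vector in metres."""
    b = math.radians(bearing_deg)
    return (distance_m * math.cos(b),
            distance_m * math.sin(b),
            -height_diff_m)  # NED z is positive down

def ned_to_body(v, roll, pitch, yaw):
    """Rotate an earth-frame (NED) vector into the body frame:
    apply -yaw, then -pitch, then -roll (transpose of Z-Y-X rotation)."""
    n, e, d = v
    x1 = math.cos(yaw) * n + math.sin(yaw) * e   # undo yaw
    y1 = -math.sin(yaw) * n + math.cos(yaw) * e
    z1 = d
    x2 = math.cos(pitch) * x1 - math.sin(pitch) * z1  # undo pitch
    z2 = math.sin(pitch) * x1 + math.cos(pitch) * z1
    y3 = math.cos(roll) * y1 + math.sin(roll) * z2    # undo roll
    z3 = -math.sin(roll) * y1 + math.cos(roll) * z2
    return (x2, y3, z3)

def pan_tilt(v_body):
    """Body-frame vector -> camera pan (+ right) and tilt (+ down)."""
    x, y, z = v_body
    return math.atan2(y, x), math.atan2(z, math.hypot(x, y))
```

For example, with the aircraft level and heading north, a target bearing 090 at the same altitude comes out as pan = +90 degrees, tilt = 0.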
Mike, I'm shooting continuous video at the moment from launch of the plane. However, the software does support a trigger signal to a servo from within the waypoints definition file. The camera servos are being driven by the Direction Cosine Matrix, which rotates the vector to a ground-based target (earth reference) into the direction of the target relative to the plane's orientation (plane reference), and then calculates the servo deflection. This is done 40 times / second as part of the main routines that read the gyro information and keep the plane stabilized. The code is all published as part of MatrixPilot. See here for the MatrixPilot camera code.
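As a rough sketch of that rotate-then-deflect step: MatrixPilot itself runs in fixed-point C on the dsPIC, so the Python below is only an illustration of the idea under assumed conventions (body-to-earth DCM, a hypothetical 500 us/rad servo gain), with all names made up for the example:

```python
import math

def servo_pulse_from_angle(angle_rad, center_us=1500, gain_us_per_rad=500.0,
                           min_us=1000, max_us=2000):
    """Map a desired gimbal angle to a servo pulse width, saturating
    at the servo's mechanical travel limits."""
    pulse = center_us + gain_us_per_rad * angle_rad
    return max(min_us, min(max_us, pulse))

def camera_servo_outputs(dcm, target_earth):
    """Rotate an earth-frame target vector into the plane frame using
    the transpose of the body-to-earth direction cosine matrix, then
    convert the result to pan/tilt servo pulses."""
    # v_body[i] = sum_j dcm[j][i] * target_earth[j]  (R transposed)
    v = [sum(dcm[j][i] * target_earth[j] for j in range(3)) for i in range(3)]
    pan = math.atan2(v[1], v[0])
    tilt = math.atan2(v[2], math.hypot(v[0], v[1]))
    return servo_pulse_from_angle(pan), servo_pulse_from_angle(tilt)
```

With an identity DCM (plane level, heading along the reference axis) and the target dead ahead, both servos sit at their 1500 us centres; angles beyond the travel limits clamp rather than wrap.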
The pitch and roll gyros are corrected for drift using the gravity vector obtained from the accelerometers (which in turn are corrected for forward acceleration and centripetal force). The yaw gyros are usually corrected for drift using the GPS velocity vector. However, nowadays we also automatically calculate the wind, so we then obtain the true heading of the plane from the GPS velocity vector corrected for wind. We can optionally fit a magnetometer, which can then also be used to correct the yaw gyro drift. This has the advantage that the yaw gyro is correct before takeoff, and so autonomous takeoffs are then possible. Best wishes, Pete (Off to do some flying).
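The two correction ideas Pete describes can be illustrated on a single axis. This is a generic complementary-filter sketch with hypothetical names and parameters, not MatrixPilot's actual (PI-feedback, fixed-point) implementation: the gyro integral is trusted short-term, the gravity-derived angle long-term, and the wind-corrected heading is just the bearing of (GPS velocity minus wind):

```python
import math

def complementary_filter(gyro_rates, accel_angles, dt, tau=1.0):
    """Blend the integrated gyro rate (accurate short-term, drifts
    long-term) with the angle implied by the accelerometer's gravity
    vector (noisy short-term, unbiased long-term) so gyro drift is
    continuously cancelled. Returns the filtered angle per sample."""
    alpha = tau / (tau + dt)
    angle = accel_angles[0]
    out = []
    for rate, acc in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1 - alpha) * acc
        out.append(angle)
    return out

def true_heading(gps_vel_ne, wind_ne):
    """GPS ground velocity minus the wind vector gives the air
    velocity; its bearing is the plane's true heading (radians)."""
    n = gps_vel_ne[0] - wind_ne[0]
    e = gps_vel_ne[1] - wind_ne[1]
    return math.atan2(e, n)
```

With a constant 0.01 rad/s gyro bias and a level aircraft, pure integration would accumulate 0.1 rad of error over 10 s at 40 Hz, while the filtered angle stays bounded near the bias-induced floor.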
Bryan, I spent some time rewriting flan.pyw and so have not been doing any further camera work for now. Essentially the software works. We also have a new maths library which can make the maths and gyros more accurate. The main issues for me are a) getting a better camera in the plane, and b) improving the accuracy of the pan servo.
Congrats Pete, on the camera targeting code. Thanks also for posting the link to Cinelerra, I'm going to have to see if I can get it working on my Mac :).
@Curt. That is a really interesting library. Thanks for pointing it out. I think there are much more exciting things to think about. We now have 30 frames / second of HD video, where the camera knows its position to within 2-3 meters and potentially its orientation in 3D to within a couple of degrees (and possibly 0.5 degrees soon).
So can we start to do image analysis and build a 3D view of the world? Both for improved mapping, but also potentially improved navigation? Maybe we would need software stabilization as the first module in that process, before passing the pictures on for further analysis. This means each frame would have more accurate orientation information when it is passed on to the 3D feature extraction process.
I've been toying with some ideas for doing real-time digital stabilization in software using OpenCV, but so far I've only been pondering different approaches in my head. I wonder, if I managed to put something together, whether it would be of any interest to the hobby UAV community, or whether there are already enough products available to do this sort of thing that I shouldn't waste my time.
Comments
The hardware is an ArduIMU V2 with a magnetometer.
Just clocked this thread, superb work.
Playing with a similar idea myself.
Q? What trigger/data feed are you using?
Yaw compensation: is this corrected from the GPS?
Lastly, would you care to share your code, and perhaps I could try to port it to AP.
Excellent work...
Mike.