An example of automated photomapping:
typical GPS accuracy plus turbulence yields around 10-15m of positional error.
After spending one minute on manual adjustments:
This is what you get right after automatic import, without any manual adjustments. IMU corrections are included, but the major factor is that the camera is already almost vertical and the platform is stable:
Since the photos are taken with the roll-stabilised head of the Pteryx UAV,
they are nearly orthographic: the shooting angle was at worst a few degrees off vertical.
The FLEXIPILOT log decoder now takes the camera lens angle and mounting offset angles as parameters, generating a Google Earth file with matching photos within seconds! This is particularly useful
for isolated shots, since overlapping images mapped in quantity give the impression of a 'randomized crowd'.
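The idea of placing a photo in Google Earth from log data can be sketched as follows. This is a simplified illustration, not the FLEXIPILOT decoder itself: it assumes a perfectly vertical (nadir) camera, a flat-earth approximation, and hypothetical parameter names for position, altitude, and lens field of view.

```python
import math

def photo_footprint_kml(lat, lon, alt_m, hfov_deg, vfov_deg, heading_deg=0.0):
    """Build a KML GroundOverlay placing one nadir photo on the ground.

    Simplified sketch (NOT the actual FLEXIPILOT decoder): assumes a
    perfectly vertical camera and a flat-earth metres-to-degrees conversion.
    """
    # Ground footprint half-sizes from altitude and lens field of view
    half_w = alt_m * math.tan(math.radians(hfov_deg / 2))
    half_h = alt_m * math.tan(math.radians(vfov_deg / 2))
    # Metres -> degrees (flat-earth approximation, fine for small footprints)
    dlat = half_h / 111_320.0
    dlon = half_w / (111_320.0 * math.cos(math.radians(lat)))
    return f"""<GroundOverlay>
  <Icon><href>photo.jpg</href></Icon>
  <LatLonBox>
    <north>{lat + dlat:.6f}</north>
    <south>{lat - dlat:.6f}</south>
    <east>{lon + dlon:.6f}</east>
    <west>{lon - dlon:.6f}</west>
    <rotation>{-heading_deg:.1f}</rotation>
  </LatLonBox>
</GroundOverlay>"""
```

A real decoder would additionally apply the mounting offset angles and residual roll/pitch from the IMU log before computing the footprint corners.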
Another approach: Microsoft ICE and 10 minutes of processing after drag & drop:
Unfortunately the image is a little skewed.
The major benefit of the stabilised head is that all photos have acceptable geometry and are useful for stitching. However, because the precise moment of taking the photo cannot be known, a significant angular and positional error will always be present in the logs. Another factor is changing lighting and altitude, inevitable in real-life operation and in turbulent weather (this time 42km/h flight speed and 20km/h wind speed, plus strong thermals affecting pitch).
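The positional error caused by shutter-timing uncertainty can be estimated with simple arithmetic. The 42km/h ground speed comes from the text above; the timing uncertainty figure is an assumption chosen only to illustrate the scale of the effect.

```python
# Rough estimate of the along-track positional error caused by not
# knowing the exact shutter moment. The 42 km/h ground speed is from
# the flight described above; the timing jitter is an assumed value.
ground_speed_ms = 42 / 3.6           # 42 km/h -> ~11.7 m/s
timing_uncertainty_s = 0.2           # assumed shutter-timestamp jitter
position_error_m = ground_speed_ms * timing_uncertainty_s
print(f"~{position_error_m:.1f} m of along-track error")  # → ~2.3 m
```

Even a fraction of a second of timestamp jitter therefore produces metre-level placement errors, on top of the GPS error itself.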
Automated mapping (AerialRobotics LogDecode) applied to a linear map: