There seems to be a lot of information out there to get someone from zero to flying with a triggered camera and then from images to processed output. But there is a gap in what to do in between.
There are several parameters we all adjust when setting up a mapping flight (sidelap, overlap, altitude, ground speed). I would like to hear whether anyone has good rules of thumb for efficient data collection, and any pitfalls they have come across.
I am very interested in this. I have two models that currently work fairly well:
1) Set the camera on a 3-second interval and "overshoot".
2) 60% side and forward overlap
#2 is working pretty well, but going with #1 is appealing. The downside of #1 is the volume of imagery; the upside is a lower risk of gaps.
Sidelap, overlap, and the resulting ground resolution are the driving factors. Camera trigger distance is a function of overlap and altitude; together with ground speed it determines the shooting interval the camera must sustain. Another factor, especially at altitudes below 50 m a.g.l., is the camera's shooting delay.
For example, if you fly at 4 m/s and your camera trigger distance is 8 m, your camera needs to be able to take an image every 2 seconds. If you fly higher you can fly faster and set the camera trigger distance to higher values, but this has a negative effect on ground resolution.
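To make the relationship concrete, here is a minimal Python sketch of the geometry. The sensor height, focal length, altitude, and overlap values are assumed example numbers for a small-sensor camera with a simple pinhole model, not figures from the posts above:

```python
# Sketch: derive camera trigger distance and required shooting interval.
# Sensor/lens/altitude numbers below are assumed example values.

def ground_footprint(altitude_m, sensor_dim_mm, focal_mm):
    """Ground coverage of one image dimension at nadir (pinhole model)."""
    return altitude_m * sensor_dim_mm / focal_mm

def trigger_distance(footprint_m, forward_overlap):
    """Distance between shots so consecutive images overlap by the given fraction."""
    return footprint_m * (1.0 - forward_overlap)

def required_interval(trigger_dist_m, ground_speed_ms):
    """Seconds between shots the camera must sustain at this ground speed."""
    return trigger_dist_m / ground_speed_ms

# Example: 50 m a.g.l., 4.55 mm sensor height, 3.6 mm lens, 75% overlap, 4 m/s
footprint = ground_footprint(50, 4.55, 3.6)   # ~63.2 m along track
dist = trigger_distance(footprint, 0.75)      # ~15.8 m between shots
interval = required_interval(dist, 4.0)       # ~3.9 s between shots

print(f"footprint {footprint:.1f} m, trigger {dist:.1f} m, interval {interval:.1f} s")
```

Plugging in the numbers from the example above (8 m trigger distance at 4 m/s), the same formula gives the 2-second interval mentioned.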
The Tower App has a nice and intuitive interface for mission planning (although you have to set WPNAV_SPEED manually).
For stitching, 60% sidelap and overlap should be sufficient. For generating digital surface models and orthomosaics, up to 85% overlap and sidelap can be required. Under ideal conditions 75% and 65% can be sufficient, but in general I use 80% and 70-75%. More overlap is required if you have a hard-mounted camera and gusty wind causes deviations from the flight path and from the nadir view.
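To see what those overlap figures mean for the volume of imagery, here is a rough Python sketch comparing images per hectare at 60/60 versus 80/75 overlap/sidelap. The camera geometry (sensor dimensions, focal length) and the 50 m altitude are assumed example values, and the estimate ignores turns and margins:

```python
# Sketch: how overlap/sidelap settings drive image count.
# Camera geometry is an assumed example (small sensor, 3.6 mm lens).

def images_per_hectare(altitude_m, focal_mm, sensor_w_mm, sensor_h_mm,
                       forward_overlap, sidelap):
    """Approximate images needed per hectare (10,000 m^2)."""
    footprint_w = altitude_m * sensor_w_mm / focal_mm   # across track
    footprint_h = altitude_m * sensor_h_mm / focal_mm   # along track
    line_spacing = footprint_w * (1.0 - sidelap)
    trigger_dist = footprint_h * (1.0 - forward_overlap)
    return 10_000.0 / (line_spacing * trigger_dist)

low  = images_per_hectare(50, 3.6, 6.17, 4.55, 0.60, 0.60)  # ~12 images/ha
high = images_per_hectare(50, 3.6, 6.17, 4.55, 0.80, 0.75)  # ~37 images/ha
print(f"60/60: {low:.0f} images/ha, 80/75: {high:.0f} images/ha")
```

With these example numbers, going from 60/60 to 80/75 roughly triples the image count, which is the trade-off between stitching-only settings and DSM/orthomosaic settings.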
You can find some guidelines for generating orthomosaics based on UAV data at http://mavis.bitmapping.de/guidelines/
PS: Mavis is currently in beta testing and will be announced soon.