There is plenty of information out there on getting someone from zero to flying with a triggered camera, and on going from images to processed output. But there is a gap in what to do in between.

There are several parameters we all tweak when setting up a mapping flight (sidelap, overlap, altitude, ground speed). I would like to hear whether anyone has good rules of thumb for efficient data collection, and about any pitfalls they have come across.


Replies

  • Hi Brady,

    Sidelap, overlap and the resulting ground resolution are the driving factors. Camera trigger distance follows from forward overlap and altitude; together with ground speed it determines how fast the camera must fire. Another factor, especially at altitudes < 50 m a.g.l., is the camera's shooting delay.

    For example, if you fly at 4 m/s with a camera trigger distance of 8 m, your camera must be able to take an image every 2 s. If you fly higher you can fly faster and use a larger camera trigger distance, but at the cost of ground resolution.
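
    A rough sketch of that arithmetic in Python; the sensor and focal-length numbers are placeholders for illustration, not values from this thread:

        def trigger_distance(altitude_m, forward_overlap, sensor_mm, focal_mm):
            # Along-track ground footprint of a nadir-pointing camera (pinhole model).
            footprint_m = altitude_m * sensor_mm / focal_mm
            # Advance (1 - overlap) of a footprint between exposures.
            return footprint_m * (1.0 - forward_overlap)

        def min_interval(altitude_m, forward_overlap, speed_ms, sensor_mm, focal_mm):
            # Shortest shot-to-shot time the camera must sustain at this ground speed.
            return trigger_distance(altitude_m, forward_overlap, sensor_mm, focal_mm) / speed_ms

        # 50 m AGL, 75% forward overlap, 4 m/s; 4.5 mm sensor height, 4.3 mm focal length.
        print(trigger_distance(50, 0.75, 4.5, 4.3))   # ~13.1 m between shots
        print(min_interval(50, 0.75, 4, 4.5, 4.3))    # ~3.3 s minimum interval

    Halving the altitude halves the trigger distance, so at the same ground speed the camera has to fire twice as fast, which is why the shooting delay matters most at low altitudes.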

    The Tower App has a nice and intuitive interface for mission planning (although you have to set WPNAV_SPEED manually).
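
    If you would rather set that parameter from a script than by hand, one way is pymavlink (the connection string is a placeholder for your telemetry link):

        from pymavlink import mavutil

        master = mavutil.mavlink_connection('udp:127.0.0.1:14550')
        master.wait_heartbeat()

        # WPNAV_SPEED is in cm/s, so 400 = 4 m/s.
        master.mav.param_set_send(
            master.target_system, master.target_component,
            b'WPNAV_SPEED', 400, mavutil.mavlink.MAV_PARAM_TYPE_REAL32)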

    For stitching, 60% sidelap and overlap should be sufficient. For generating digital surface models and orthomosaics, up to 85% overlap and sidelap can be required. Under ideal conditions 75% and 65% can be enough, but in general I use 80% overlap and 70-75% sidelap. More overlap is also needed if you have a hard-mounted camera, since gusty wind causes deviations from the flight path as well as from the nadir view.
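
    The sidelap half of that trade-off sets the flight-line spacing, and with it the number of lines and the total flight time. A small sketch, again with placeholder camera values:

        def line_spacing(altitude_m, sidelap, sensor_width_mm, focal_mm):
            # Across-track ground footprint, thinned by the requested sidelap.
            footprint_m = altitude_m * sensor_width_mm / focal_mm
            return footprint_m * (1.0 - sidelap)

        # 50 m AGL, 6.2 mm sensor width, 4.3 mm focal length (placeholders).
        for sidelap in (0.60, 0.70, 0.80):
            print(f"{sidelap:.0%} sidelap -> {line_spacing(50, sidelap, 6.2, 4.3):.1f} m between lines")

    Going from 60% to 80% sidelap halves the spacing between lines, so you fly roughly twice as many lines over the same area.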

    You can find some guidelines for generating orthomosaics from UAV data at http://mavis.bitmapping.de/guidelines/ 

    Best regards,

    Thorsten

    PS: Mavis is currently in beta testing and will be announced soon

  • Good morning,

    I am very interested in this. I have two models that currently work fairly well:

    1) Set the camera on a 3-second interval and "overshoot".

    2) 60% side and forward overlap

    #2 is working pretty well, but #1 is appealing. The downside of #1 is the volume of imagery; the upside is a lower risk of gaps.
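
    A back-of-the-envelope comparison with made-up numbers (a 500 m flight line at 4 m/s, and a placeholder ~52 m along-track footprint):

        # 1) Fixed 3 s interval: one image every 12 m of ground, plus whatever
        #    fires during turns and transit; that is where the extra volume comes from.
        images_interval = 500 / (4 * 3) + 1            # ~43 images per line

        # 2) Distance triggering at 60% forward overlap: one image every ~21 m,
        #    and nothing during turns.
        images_overlap = 500 / (52 * (1 - 0.60)) + 1   # ~25 images per line

        print(round(images_interval), round(images_overlap))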

    -David

