Aerial Image Processing Software and Workflows

Hi all,

I am starting this discussion to bring together the different types of software people are using to process photos collected with their drones, what they are using it for, and what limitations they are finding.

I am currently using the following software:

Agisoft Photoscan Pro

-The data I am running through it was not collected for photogrammetry, so I am having some difficulties

-Some images were collected in winter with snow on the ground, making it harder for Photoscan to find matching points

-Some images do not have enough overlap or are not of good enough quality

-Photoscan sometimes has trouble finding matching points in images over forested landscapes

Microsoft ICE

I have been using this for quick stitching of images with too little overlap for Photoscan.

I have found it does not work well for long, linear sets of images.

Google Earth

I use Google Earth to find coordinates for ground control points or for georeferencing.

QGIS

QGIS is an open-source GIS package that I use for georeferencing stitched or single images and for creating data from the images.
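Since georeferencing comes up throughout this workflow, here is a minimal sketch of the affine fit that tools like the QGIS Georeferencer perform when you enter ground control points. The GCP coordinates below are made up for illustration; real GCPs would come from Google Earth or a survey.

```python
# Sketch of the affine transform a georeferencer fits from ground control
# points (GCPs): world_x = a*col + b*row + c, world_y = d*col + e*row + f.
# Three non-collinear GCPs determine the six coefficients exactly.

def affine_from_gcps(gcps):
    """gcps: three (col, row, world_x, world_y) tuples -> ((a,b,c), (d,e,f))."""
    (c1, r1, x1, y1), (c2, r2, x2, y2), (c3, r3, x3, y3) = gcps
    det = c1 * (r2 - r3) - r1 * (c2 - c3) + (c2 * r3 - c3 * r2)

    def solve(v1, v2, v3):
        # Cramer's rule on the 3x3 system  a*ci + b*ri + c = vi.
        a = (v1 * (r2 - r3) - r1 * (v2 - v3) + (v2 * r3 - v3 * r2)) / det
        b = (c1 * (v2 - v3) - v1 * (c2 - c3) + (c2 * v3 - c3 * v2)) / det
        c = (c1 * (r2 * v3 - r3 * v2) - r1 * (c2 * v3 - c3 * v2)
             + v1 * (c2 * r3 - c3 * r2)) / det
        return a, b, c

    return solve(x1, x2, x3), solve(y1, y2, y3)

def pixel_to_world(col, row, affine):
    """Apply the fitted transform to one pixel coordinate."""
    (a, b, c), (d, e, f) = affine
    return a * col + b * row + c, d * col + e * row + f

# Example: 0.5 m pixels, image origin at world (100, 200), north-up.
gcps = [(0, 0, 100.0, 200.0), (10, 0, 105.0, 200.0), (0, 10, 100.0, 195.0)]
affine = affine_from_gcps(gcps)
print(pixel_to_world(4, 4, affine))  # -> (102.0, 198.0)
```

QGIS can also fit higher-order polynomial or thin-plate-spline transforms when terrain relief or lens distortion makes a plain affine insufficient.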

I also do a lot of work with LiDAR data, so I am very interested in classifying the point clouds I can create with Photoscan. The new version has a tool for classifying ground points and then lets you manually sort the rest. I am also interested in exploiting the one advantage that photogrammetrically derived point clouds have over raw LiDAR data: point colours. I would like to create a workflow that classifies the resulting orthos into feature types (as can already be done) and then assigns those feature types to the point cloud that is created alongside them.
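The ortho-to-point-cloud transfer described above boils down to a nearest-cell lookup: for each point, find the classified ortho cell under its XY and copy the class code. This is an illustrative sketch, not Photoscan functionality; the grid, geotransform, and points are made up, and in practice you would read a classified GeoTIFF with GDAL/rasterio and points from a LAS/PLY file.

```python
import math

# Assign each point the class of the ortho cell it falls in.
# geotransform follows the GDAL convention for a north-up raster:
# (origin_x, pixel_width, 0, origin_y, 0, -pixel_height).

def classify_points(points, class_grid, geotransform):
    """points: (x, y, z) tuples; class_grid: rows of class codes.
    Returns (x, y, z, class) tuples, with class=None for points off the grid."""
    ox, pw, _, oy, _, ph = geotransform   # ph is negative (y decreases per row)
    rows, cols = len(class_grid), len(class_grid[0])
    labelled = []
    for x, y, z in points:
        col = math.floor((x - ox) / pw)   # floor, so negatives fall off-grid
        row = math.floor((y - oy) / ph)
        cls = class_grid[row][col] if 0 <= row < rows and 0 <= col < cols else None
        labelled.append((x, y, z, cls))
    return labelled

# Toy example: a 2x2 ortho of 10 m cells with class codes 1-4.
grid = [[1, 2],
        [3, 4]]
gt = (0.0, 10.0, 0.0, 20.0, 0.0, -10.0)
print(classify_points([(5.0, 15.0, 1.2), (15.0, 5.0, 8.0)], grid, gt))
```

For millions of points you would vectorise the same arithmetic with NumPy rather than looping, but the cell-lookup logic is identical.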

thanks,

jarrett


Replies

  • Hi, there's a new online service for image processing:

    http://www.agrocam.eu/image-processing

    It seems to be free.

  • I would just like to add this link (below), Image Mosaicking with GDAL:

    http://linfiniti.com/2009/09/image-mosaicking-with-gdal/

    There is also a QGIS plugin now, "DronePlanner":

    http://comments.gmane.org/gmane.comp.gis.qgis.devel/30810

    Hopefully, QGIS will be able to do mosaicking, georectification, and orthorectification in the future. Does anybody know what is needed to do these processes in QGIS (e.g. a flowchart), so that someone might be able to develop a QGIS plugin for mosaicking UAV aerial photos?
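    In the meantime the GDAL route is easy to script. Here is a minimal sketch that assembles a gdalwarp mosaic command; gdalwarp is one common way to mosaic georeferenced images, and the file names and nodata value below are placeholders.

```python
# Build a gdalwarp command that mosaics georeferenced inputs into one file.
# gdalwarp reprojects/resamples as needed; -srcnodata/-dstnodata keep the
# black collars around each frame from overwriting neighbouring images.

def mosaic_command(inputs, output, nodata=0):
    cmd = ["gdalwarp",
           "-srcnodata", str(nodata),   # treat this value as transparent input
           "-dstnodata", str(nodata)]   # and write it out as nodata too
    return cmd + list(inputs) + [output]

cmd = mosaic_command(["img_001.tif", "img_002.tif"], "mosaic.tif")
print(" ".join(cmd))
```

    With GDAL installed you would run this via subprocess.run(cmd, check=True); gdal_merge.py is an alternative when all the inputs already share a projection and resolution.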

  • Does anyone have any tips for adding oblique imagery into the Photoscan workflow? Or, alternatively, how to capture imagery so that the vertical surfaces have adequate detail?

    I have an area that I covered shooting nadir imagery, and it stitches together reasonably well, but it's missing detail on the sides of buildings. I flew around them shooting obliquely from a low altitude to try and fill it in, but adding these in seems to degrade the final result rather than improve it. All images were geotagged using MP and captured in the same flight session.

    I tried the same data set (nadir only) with VisualSFM but didn't get particularly good overall results either.

    • Thanks, so on this run, the verticals were shot from 40m, and the obliques were shot from around 13m at a 45 degree angle. Should the obliques have been shot from 40m also?

      I've just put the verticals through VisualSFM/Meshlab now and am getting a reasonably good-looking model, but the heights of the buildings didn't really come through well, and there's an overall convex curve to the ground, which should be flat. Looking at it top-down it looks pretty good, though.

      [two screenshots of the resulting model]

      I'll see how putting the data through Pix4Dmapper goes once I get my trial sorted.

      • Yeah, I did use the Poisson reconstruction through Meshlab; I was following your video on it.

        I'm going to go and reshoot this set with a larger coverage area and more vertical overlap (~70% vs 50%) shortly and see if it helps things much. Oh, and I'll switch to JPEG rather than RAW this time... I was wondering why it was shooting so slowly before...

        Do you have an example of the command line you'd pass through to PoissonRecon?

        • There's another video showing a workflow for 3D engine meshes that uses this new tool:

          https://www.youtube.com/watch?v=Yt9MmQHobTI

          You don't have to reduce the point cloud density as much as in this video, but of course with many more points the Poisson runs slower. It's using solverDivide 7 and depth 10 in this case; you can increase them to make the mesh finer. I don't think the final mesh uses the original points — they're relocated onto the calculated mesh. You do need point normals. You should also check for non-manifold geometry afterwards, as it sometimes leaves some non-manifold points/edges.

          In my end results you see the blobbiness as well. That's because that vegetation got only a very small number of points, so it's building a 'guess' hull around them. If you want sharper edges, more points are needed.

          If you want things to look better, I'd go into CloudCompare and work on the points there before the Poisson step. You could try resampling around building edges to increase point density there.
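          To tie this back to the command-line question: a PoissonRecon v5.71 invocation with the settings mentioned above might look like the sketch below. The file names are placeholders, and the input must be a point cloud with normals (e.g. exported from Meshlab).

```python
# Sketch of a PoissonRecon (v5.71) command line using the solverDivide 7 /
# depth 10 settings discussed above; adjust them to trade mesh detail
# against memory and run time.

def poisson_command(points_in, mesh_out, depth=10, solver_divide=7):
    """Build the PoissonRecon command line as a list of arguments."""
    return [
        "PoissonRecon",
        "--in", points_in,                     # input points + normals (.ply/.npts)
        "--out", mesh_out,                     # output triangle mesh (.ply)
        "--depth", str(depth),                 # octree depth: higher = finer mesh
        "--solverDivide", str(solver_divide),  # lower memory use, slower solve
    ]

print(" ".join(poisson_command("cloud.ply", "mesh.ply")))
```

          With the binary on your PATH you would run it as subprocess.run(poisson_command(...), check=True); the screening parameter (--pointWeight) can be added the same way if you want sharper edge following.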

      • Did you use the Poisson reconstruction from Meshlab? I seem to remember that that particular implementation has a tendency to curl over like you see here: it has a bias towards convex hulls, so flat surfaces suffer a lot. I've been using the Poisson reconstructor available here, which has this tendency far less:

        http://www.cs.jhu.edu/~misha/Code/PoissonRecon/Version5.71/

        The blobbiness of the buildings and vegetation is due to the same thing. The implementation here lets you set parameters that determine how aggressively it follows edges and corners versus smoothing the surface (I think it was pointWeight).

        With these scale-invariant algorithms you get the best results if you change scale by /2 or *2, because that's how the features are built internally. So having flown at 40 m, you may get better results flying at 20 m. Some empirical testing is needed though, because these detectors have subpixel accuracy, so it's possible there's no measurable difference.

    • Hmmm... Neither Photoscan nor VisualSFM should have any problems with the obliques. I have tested this myself, on aerial and close-range images; VisualSFM is a bit more sensitive to them. Anyway, provided that there is adequate coverage between the vertical and oblique images, you shouldn't have any problems.

      Nevertheless, if your obliques are far from vertical and at a different scale (i.e. taken closer) than the verticals, then it is normal that these programs cannot cope with the data.

      In such cases you take vertical, near-vertical oblique (i.e. 60-70 deg), and oblique (45 deg) photos, in order to ease the transition and allow the software (SIFT or whatever) to find common tie points across the images.

      D

  • Jarrett,

    I use Photoscan Pro and it works superbly. When we were deciding what to get, I evaluated VisualSFM and Pix4UAV as well. VisualSFM produces nice 3D products but has no native mosaic export feature and does not georeference natively. It is free, but those two items disqualified it for me immediately.

    Pix4UAV worked well, but I was consistently happier with the quality of reconstruction and mosaic creation I was getting from Photoscan. That said, that was their old version, so they may have made significant improvements since then. Either way, the better quality from Photoscan came with a much, much lower price (the academic price was $550, versus more than 3x that for Pix4UAV at the time).

    In any case, if you plan on doing much with any of these programs you'll need some serious computing power. Large datasets (500+ images) require some hefty processing power and RAM. We run a workstation with 64 GB of RAM, dual Xeon processors, and a top-of-the-line graphics card. Working with datasets of 1000 images is now pushing us to double our RAM to 128 GB.

    Max

    • Dear Max, Pix4D has indeed made some huge improvements in the new version of its software, Pix4Dmapper.

      Among the significant changes: the quality of the outputs has improved, and the rayCloud lets you improve quality and make accurate measurements.

      See all the new features here: http://pix4d.com/products/

      And the price table, with a solution for every project size and budget: http://pix4d.com/buy_rent/
