From survey to visualization

So I had some time to look into the 3D reconstruction and visualization some more and managed to make significant improvements to the workflow. In the first results I didn't clean up any points and applied the texture to the entire model in one go, straight from generation into Blender. That makes the UV atlas (the parametrized texture that decides which part of the image each surface gets) very fragmented, so you end up with relatively few useful pixels in the texture and lose about 50% of the available space. Increasing this to two textures doesn't really help that much.

So today I took a different approach: spend one day cleaning up the model, splitting up the data and iterating towards incrementally better results. My workflow now uses the following steps:

  1. Generate the dense point cloud. I used "medium" settings, as "high" would give me 54M points and I don't have enough memory to post-process that many. I work with around 16M points.
  2. Delete stray points. More time spent here improves the mesh results, because you generate fewer surfaces that have to be parametrized in the UV atlas, so try to get rid of all of them. Make sure to delete points both below and above the surface. Try to delete points inside buildings too (they are not visible anyway and were only generated in an attempt to reconstruct the roof).
  3. Remove trees and vegetation. If you want trees, recreate them in a 3D package; they never look very good in survey results. This does leave some gaping holes in the ground, because the trees hid the ground detail in the photos.
  4. (Optionally, close the gaps where the trees used to be.)
  5. Classify the data points to separate the meshes you're going to create. Separating the meshes really helps to reduce the wasted space in the UV atlas (the texture). Ideally, you want to get rid of all vertical surfaces (surfaces steeper than 45 degrees). Classification is done by selecting 'building' or 'ground' for each data point (unfortunately yes, each one).
  6. Select all data points and classify them as ground.
  7. Use the lasso tool to select an area around a building you want to classify and also select some points of the ground area around it. Classify them as building.
  8. From a slightly oblique view, select the points near the building where you want the building edge to be and move that selection outwards. Reclassify those points back to ground. After doing this from 2-3 different views around the building, you have a clean separation between 'building' and 'ground'.
  9. Do this for all buildings.
  10. Try to remove other vertical surfaces like cars and fences.
  11. Build your mesh using only the points classified as ground. (Steps 11-16 can also be scripted from PhotoScan's Python console; see the first sketch after this list.)
  12. Build the texture for the mesh. For the ground, the "ortho" projection works well at 8192 if you don't have many vertical surfaces.
  13. Export the ground model as obj or whatever you prefer.
  14. Build your mesh using only points classified as buildings. This replaces your mesh, but not your dense point cloud.
  15. Build the texture for this mesh too, but use the "generic" projection instead, again at 8192.
  16. Export the "buildings" model as obj or whatever you prefer.
  17. Import the buildings and the ground into Blender or some other 3D tool. I use Cycles in Blender, so I need to enable "Use Nodes" in the material settings and select the texture there.
  18. I select my "all buildings" object and separate each building into its own object (see the Blender sketch after this list). This lets me hide the parts I'm not working on and makes it easier to check results from different viewpoints. I then activate only the building I'm working on plus the ground, and usually start with a new cube that I try to fit to the building. From there I split the cube into fragments, extrude and push back surfaces, and iteratively add more detail to the mesh, trying to match the original as closely as possible. Obviously, the better the data you have of the original, the better this works out. In a sense, the PhotoScan-generated mesh is a 'proxy' for me to work from.
  19. I then delete the generated object and keep the clean building. At this point you have as much detail as you could reasonably get, so the original building mesh is never needed again.
  20. I then export each separate object, one by one, from Blender and import it back into PhotoScan.
  21. It appears as a single mesh, and I regenerate the texture for that building using the new geometry. This time I select a small texture size like 256, 512 or 1024. That's fine for aerial visualizations, but it depends on the size of the building.
  22. When the texture is rebuilt, I export the single building from PhotoScan again and re-import it into Blender, substituting it for the untextured version. It now has a nice new texture with the original dirt and grime.
  23. When all buildings have been done one by one, the scene looks amazing, especially from a bit of a distance. The sharp corners look much better, as does the removal of the bubbly surfaces. It is possible, though, that the texture doesn't exactly fit the wall, so it may wrap over onto the roof or vice versa; the geometry always involves a bit of guesswork.
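For reference, steps 11-16 can also be driven from PhotoScan's built-in Python console instead of the menus. The sketch below is written against the PhotoScan 1.x scripting API from memory, so treat every call, constant and file name as an assumption and check the Python API reference for your version; in particular, if your version's buildModel has no classes argument, select the point class in the GUI before building each mesh.

```python
# Rough sketch only -- PhotoScan 1.x Python console, names not verified.
# Assumes the dense cloud is already built, cleaned and classified into
# 'ground' and 'building' points (steps 1-10 above).
import PhotoScan

doc = PhotoScan.app.document
chunk = doc.chunk

# Ground: mesh from ground-classified points, orthographic UV mapping (steps 11-13).
chunk.buildModel(surface=PhotoScan.Arbitrary,
                 source=PhotoScan.DenseCloudData,
                 face_count=PhotoScan.HighFaceCount,
                 classes=[PhotoScan.PointClass.Ground])
chunk.buildUV(mapping=PhotoScan.OrthophotoMapping)
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=8192)
chunk.exportModel("ground.obj")        # hypothetical output name

# Buildings: same again with generic UV mapping (steps 14-16). Building a new
# mesh replaces the current one, so the ground export above must come first.
chunk.buildModel(surface=PhotoScan.Arbitrary,
                 source=PhotoScan.DenseCloudData,
                 face_count=PhotoScan.HighFaceCount,
                 classes=[PhotoScan.PointClass.Building])
chunk.buildUV(mapping=PhotoScan.GenericMapping)
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=8192)
chunk.exportModel("buildings.obj")     # hypothetical output name
```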
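For the Blender side (steps 17-18), here is a minimal sketch against the Blender 2.7x Python API (the scene and material calls changed in later versions): import the buildings OBJ, give it a "Use Nodes" Cycles material with the exported texture, and split it into one object per building. The file names are placeholders for whatever you exported from PhotoScan.

```python
# Minimal sketch for Blender 2.7x with Cycles as the active render engine.
import bpy

# Import the PhotoScan export (placeholder path).
bpy.ops.import_scene.obj(filepath="buildings.obj")
obj = bpy.context.selected_objects[0]
bpy.context.scene.objects.active = obj

# Minimal "Use Nodes" material: image texture -> diffuse shader.
mat = bpy.data.materials.new("buildings_texture")
mat.use_nodes = True
nodes = mat.node_tree.nodes
tex = nodes.new('ShaderNodeTexImage')
tex.image = bpy.data.images.load("//buildings.jpg")   # texture exported with the OBJ
mat.node_tree.links.new(tex.outputs['Color'], nodes['Diffuse BSDF'].inputs['Color'])
if obj.data.materials:
    obj.data.materials[0] = mat
else:
    obj.data.materials.append(mat)

# Separate disconnected pieces so each building becomes its own object.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.separate(type='LOOSE')
bpy.ops.object.mode_set(mode='OBJECT')
```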

Further improvements can be made from there. You could decide to tweak the geometry a bit more.

It's probably a good idea to classify the ground close to buildings not as ground but as a separate class. I found that if a building has no ground points around it, its base seems to wave upwards and you have no proper reference for the ground level. You can see this happening in the video when it flies towards the big building: look at the little shed roofs near the bottom and the two buildings on the right. So it's a good idea to keep some ground points there. You'll then want to create each mesh from two point groups, so those ground points end up in both meshes.

Where can you take this? Well, the guys here used a helicopter and PhotoScan to generate a 3D model of a spot in South Africa called Sandton City. They used that model much like I did, as a proxy for making their own 3D models, but they modeled the buildings the way they wanted in order to get better control over how they can be disassembled. It also reduces the poly count. The exact position, size and hull are defined quite well by photogrammetry, so you can have an artist work out the interior bits. You can see how this works in the "making of" video. It only took them 9 weeks!

All these efforts are simply dwarfed by the work from AEROmetrex, of course.


Comments

  • I'm by no means an expert and I don't operate these myself, but I saw some references on how the oblique angles are produced:

    http://dev.interatlas.fr/copy-site-ia-ums2/moyens-en.html

    http://dev.interatlas.fr/copy-site-ia-ums2/ia-oblique-en.html

  • Hi Gerard,

    The mapping will be done from a plane, which is being delayed at the moment due to Hamas firing missiles into the required airspace...

    Thanks for the note about the ground-angle coverage; we might be doing that for some select locations in the fabric.

    From previous tests, it is very advisable to aim for a very high front-to-back and side-to-side source imagery overlap, around 70-80%! Could you post a screen capture showing the "camera positions"?

    G

  • Hi Gil,

    Certainly looking forward to the results. Is the mapping done with a UAV or a heli/plane?

    As for the platform for the DSM generation: if you mean the flying platform, it's the custom 35-minute-endurance hexa with a Canon CHDK camera and APM. If you mean the post-processing software, that's all PhotoScan. I'm going to play around with the model over the weekend. One of the things I still want to evaluate is the use of normal/displacement/specular maps in the scene, to try to remove some of the flatness of the textures and introduce more detail. Unfortunately, in this dataset I don't have ground photos, so the detail of vertical surfaces is relatively poor. If your project has some landmark buildings in the city area, I recommend sending someone out to take a couple of pictures around them to increase the detail around the base and the surrounding area, especially if there are trees nearby.

  • Hi Gerard,

    Just a note: AeroMetrex are using the Acute3D Smart3DCapture solution, which currently is the best SfM / photogrammetric modeler out there. After much research, I am now leading a pilot project involving this technology together with Skyline's TerraExplorer set of solutions. We are shooting an urban area of approx. 3 square kilometers at 3-4 cm per pixel resolution. I hope to be able to post some results here, even though it is not truly DIY nor drone related - it is just a matter of geographic scale. :-)

    Given that - your results are truly impressive!!!

    What platform did you use for the initial DSM generation?

    Gil

  • Blender screenshot:

    You can see the buildings that were "reconstructed" manually (basically cubes) and how the other buildings still have jagged surfaces. The ground was not touched, but it may be possible to remesh that too without losing too much detail.

  • APM 2.6.x and Mission Planner for data collection, and a Canon+CHDK camera. I used a custom script to load the APM GPS positions from the log (the CAM log lines) into the photos' EXIF data; a rough sketch of that kind of script is at the end of this comment.

    From there, the workflow uses only PhotoScan and Blender. As I said in the tutorial, I reconstructed some buildings by hand using the PhotoScan-exported building as a template. Then I re-imported the manual mesh into PhotoScan for retexturing.

    This version used 720p and a poor bitrate in the upload. Here's a new version in 1080p HD at much better quality:

    https://www.youtube.com/watch?v=bDohfdA_fpc
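    A minimal sketch of such a geotagging script (not the exact one used here), assuming a plain-text dataflash log dump, the exiftool command-line utility on the PATH, and photos that line up one-to-one with the CAM messages when sorted by filename; the CAM field order differs between ArduPilot versions, so check a log line and adjust the indices first:

    ```python
    # Sketch: copy CAM-line GPS positions from an APM log dump into photo EXIF
    # tags via exiftool. File names and field indices are assumptions.
    import glob
    import subprocess

    LOG = "flight.log"                        # hypothetical log file
    PHOTOS = sorted(glob.glob("photos/*.JPG"))

    cams = []
    with open(LOG) as f:
        for line in f:
            if line.startswith("CAM"):
                fields = [s.strip() for s in line.split(",")]
                # Assumed order: CAM, GPSTime, GPSWeek, Lat, Lng, Alt, ...
                lat, lng, alt = float(fields[3]), float(fields[4]), float(fields[5])
                cams.append((lat, lng, alt))

    for photo, (lat, lng, alt) in zip(PHOTOS, cams):
        subprocess.check_call([
            "exiftool", "-overwrite_original",
            "-GPSLatitude=%.7f" % abs(lat), "-GPSLatitudeRef=" + ("N" if lat >= 0 else "S"),
            "-GPSLongitude=%.7f" % abs(lng), "-GPSLongitudeRef=" + ("E" if lng >= 0 else "W"),
            "-GPSAltitude=%.2f" % alt, "-GPSAltitudeRef=0",
            photo,
        ])
    ```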

  • Nice work!

  • Hi Gerard,

    Can you list all the software you are using in the workflow to achieve this result?

    Thanks.

  • Can you post a picture of what the bad result looks like? Then perhaps we can comment.

    To start with, you want photos where the entire scene is in focus. Some lenses are a bit soft, and you also need an appropriate aperture for this, probably higher than f/8.0 (f/16 if possible). Definitely check the edges, and check objects that are further away than the closest object in the scene, to make sure everything is sharp.

    Don't rotate the camera in place as if it's on a tripod, because the algorithm cannot determine depth that way. You want to move the camera around the building, so you'd typically step sideways instead. It's sometimes helpful to take pictures at different scales, so you can stand further away, at around 1.5 to 2 times the distance, and shoot the same scene again. In that case you only need about half the number of pictures (see the quick footprint calculation at the end of this comment).

    Also, not sure if you noticed, but I only reconstructed three buildings here: the main building, the little sheds and the house in the woods. The other buildings weren't touched, because that would take longer than a day and I don't have good geometry estimates for them.
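    As a quick back-of-the-envelope check on that distance trade-off (my own illustrative numbers, not from this survey), assuming a simple pinhole footprint and a fixed sideways overlap:

    ```python
    # How many photos cover a facade of a given length from a given distance,
    # assuming a pinhole camera footprint and fixed sideways overlap.
    # hfov_deg and overlap are illustrative assumptions, not measured values.
    import math

    def photos_for_facade(length_m, distance_m, hfov_deg=60.0, overlap=0.7):
        footprint = 2.0 * distance_m * math.tan(math.radians(hfov_deg) / 2.0)
        step = footprint * (1.0 - overlap)      # sideways step between shots
        return int(math.ceil(length_m / step))

    print(photos_for_facade(40, 10))   # 12 photos at 10 m
    print(photos_for_facade(40, 20))   # 6 photos at 20 m: double the distance, half the shots
    ```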

  • Looks great! I have a friend using PhotoScan but not getting very nice results, even when taking hundreds of pictures of the very same building. Is there a kind of "photogrammetry course for dummies" that would start from the very first steps in PhotoScan and go all the way to a quality result like yours?
