Making a 3D SBS video from a set of 2D images

In a previous post on diydrones I showed how to make a mesh model for visualization or games from a landscape surveyed by a drone. That workflow started off with the set of bare 2D images and used VisualSfM and CMPMVS to generate the point cloud and camera parameters.

VisualSfM and CMPMVS take a long time to run, though, and don't always produce the best end results. The commercial alternatives (pix4d, agisoft, menci, etc.) also generate parameter files that contain the camera positions and, with a bit of processing, may produce better point clouds. I wanted to find out if I could convert those results by script into the same kind of outputs, so I wouldn't have to depend on my own runs to start working in 3D; at the very least it would take less time to process.

I added some excitement by rendering a 3D Side-By-Side (SBS) video of the end results. Stuff used: pix4d output files and parameters, a custom script to generate the bundler.out and list.txt files, meshlab, cloudcompare and blender.
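The bundler v0.3 format is simple enough to write by hand. Below is a minimal sketch of what such a script can look like; the `cameras` structure and file names are hypothetical, point entries are left out, and whether Meshlab accepts a zero-point file may depend on the version, so check your import:

```python
def write_bundler(path, cameras):
    """Write a minimal Bundler v0.3 file for Meshlab's raster import.

    `cameras` is a hypothetical list of dicts holding 'f' (focal length
    in pixels), 'R' (3x3 rotation matrix, row major) and 'T' (the
    translation vector, i.e. -R * C, see the next paragraph).
    """
    with open(path, "w") as out:
        out.write("# Bundle file v0.3\n")
        out.write("%d %d\n" % (len(cameras), 0))            # cameras, points
        for cam in cameras:
            out.write("%g %g %g\n" % (cam["f"], 0.0, 0.0))  # f k1 k2
            for row in cam["R"]:
                out.write("%g %g %g\n" % tuple(row))
            out.write("%g %g %g\n" % tuple(cam["T"]))

def write_list(path, image_names):
    """Write the matching list.txt, one image filename per line."""
    with open(path, "w") as out:
        out.writelines(name + "\n" for name in image_names)
```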

Meshlab expects the camera parameters as a rotation matrix with a translation applied afterwards, not the actual camera position. The geocal file from pix4d contains the position, however. A bit of math shows that T = -R·C (translation = -rotation × position), so the translation is easily derived. The rotation matrix R is calculated from the Euler angles, because the matrix in the metriccal file uses a different convention for the axes.
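As a sketch, assuming an omega/phi/kappa X-Y-Z rotation order (the usual photogrammetric convention; verify the order and signs against your own pix4d output):

```python
import numpy as np

def rotation_from_euler(omega, phi, kappa):
    """Build a rotation matrix from omega/phi/kappa angles (degrees).

    Assumes an X-Y-Z rotation sequence; the exact convention depends on
    the pix4d file, so check before relying on it.
    """
    o, p, k = np.radians([omega, phi, kappa])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(o), -np.sin(o)],
                   [0, np.sin(o),  np.cos(o)]])
    Ry = np.array([[ np.cos(p), 0, np.sin(p)],
                   [ 0,         1, 0        ],
                   [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(k), -np.sin(k), 0],
                   [np.sin(k),  np.cos(k), 0],
                   [0,          0,         1]])
    return Rx @ Ry @ Rz

def translation_from_position(R, C):
    """Convert a camera position C into the translation T = -R @ C
    that Meshlab and the bundler format expect."""
    return -R @ np.asarray(C)
```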

The texturing of the mesh was done as in the other example, but this time I used the point cloud from pix4d only, slightly tweaked in cloudcompare. The pix4d point clouds are in a UTM projection, so I had to apply a global shift and then remove that shift manually, bringing the point cloud into a local coordinate system for easier processing; otherwise the coordinate values are too big for visualization. The same global shift was later applied to the camera positions in the custom processing script.
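That shift is just a vector addition; a minimal sketch, with made-up offset values, of what the camera-position part looks like in the processing script:

```python
import numpy as np

# Hypothetical offset matching the global shift used in cloudcompare;
# the real values depend on where your survey area sits in UTM space.
GLOBAL_SHIFT = np.array([-594000.0, -5790000.0, 0.0])

def shift_camera_position(C_utm):
    """Move a UTM camera position into the same local frame as the
    shifted point cloud."""
    return np.asarray(C_utm) + GLOBAL_SHIFT
```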

In the end I only had to import the OBJ into blender, set up a stereo camera rig, define a path for the camera to follow, join the stereo renders in Side-By-Side fashion and render an animation from the image strip.
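The join step doesn't have to happen inside blender; a few lines of Python with PIL do the same thing outside it. A sketch, with hypothetical file names, assuming both renders have the same dimensions:

```python
from PIL import Image

def join_sbs(left_path, right_path, out_path):
    """Paste one left/right stereo pair into a single SBS frame."""
    left, right = Image.open(left_path), Image.open(right_path)
    w, h = left.size
    sbs = Image.new("RGB", (w * 2, h))
    sbs.paste(left, (0, 0))
    sbs.paste(right, (w, 0))
    sbs.save(out_path)

# e.g. for every rendered frame pair:
# join_sbs("left/0001.png", "right/0001.png", "sbs/0001.png")
```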

It's clear that the level of detail is really low, both in texture and geometry, but that's because the point cloud was downsampled a bit too far and the poisson surfacing was not very aggressive in maintaining angles. It's possible to get better results by keeping the point density high, applying the poisson filter more aggressively and subdividing the mesh. The most important issue here, though, was the very low overlap between images: the source material must be of high quality to begin with. Trees and the like come out flat, projected onto the ground, because there was far too little information to reconstruct them, and trees with leaves turn into hunks of goo under the poisson surfacing. Those are issues that need to be tweaked by hand.
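If you want to redo the surfacing step non-interactively, meshlabserver can replay a filter script saved from the GUI. A sketch with hypothetical file names; the .mlx script would hold the poisson reconstruction, where the octree depth is the main knob for geometric detail:

```python
import subprocess

# poisson.mlx: a meshlab filter script saved from the GUI that runs
# poisson surface reconstruction; a deeper octree keeps more detail.
subprocess.check_call([
    "meshlabserver",
    "-i", "cloud_dense.ply",   # keep the point density high here
    "-s", "poisson.mlx",
    "-o", "surface.ply",
])
```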

From here, the sky is the limit: import the APM log and re-fly the mission over this virtual terrain, add some trees, remodel the mesh with primitives to get better-looking results, add skyboxes, animate stuff, script it with the blender game engine and make it navigable with the Oculus Rift, or import the landscape into any other game engine and create interface apps that "do" stuff in the real world. For example: plan your mission virtually by flying through it in 3D, then collect the curve and build a list of waypoints (see the sketch below).
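That last idea is a small script away. A sketch to run in blender's Python console, with a hypothetical object name, assuming a poly or NURBS curve with no object transforms:

```python
import bpy

GLOBAL_SHIFT = (-594000.0, -5790000.0, 0.0)  # same hypothetical offset as above

curve = bpy.data.objects["MissionPath"]      # hypothetical object name
for point in curve.data.splines[0].points:   # bezier curves use .bezier_points
    x, y, z, _w = point.co                   # spline points are 4D vectors
    # undo the global shift to get back to UTM coordinates for the waypoints
    print(x - GLOBAL_SHIFT[0], y - GLOBAL_SHIFT[1], z - GLOBAL_SHIFT[2])
```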

This experiment shows that it's possible to get miles ahead in a 3D visualization project when you've surveyed the area with a drone first. It takes 4 hours to collect the data, a day to process it 'in the cloud' and only half an hour to get your OBJ from the end results. That means you can have a 3D model of the area ready for remodeling and editing in two days.


Comments

  • @Gerard: I'm sorry, I probably should have tested on a desktop (my tablet didn't show the "gear", as I mistakenly expected). Comment removed.

    Great video (now at the desktop).

  • @Arthur: Should be available for you under the settings icon (gear): 3D on/off and an "options" link.

  • Gerard, that's brilliant!

    I'll definitely have a closer look at Blender.
