It is a little off topic, but I thought many forum-goers might find the Photosynth-to-3D process I've been working on useful for creating 3D models from aerial photos.  The process is pretty simple and FREE:

- Gather images with a lot of overlap (like most of our aerial shots)

- Upload the files to http://www.photosynth.net (or use the open-source Bundler app: http://phototour.cs.washington.edu/bundler/ )

- Use this exporter to extract the point cloud: http://pspcexporter.codeplex.com/

- Use a product like MeshLab ( http://www.meshlab.net/ , hard to get good results) or VrMesh Studio ( http://www.vrmesh.com/products/overview.asp ) to generate a mesh surface from the point cloud.
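As a side note, if you ever need to get points into MeshLab by hand, the ASCII PLY format it reads is simple enough to write yourself. A minimal Python sketch (the file name and point coordinates are made up for illustration, not from the exporter above):

```python
# Write a point cloud as an ASCII PLY file that MeshLab can open.
# The points here are a made-up example; a real cloud would come
# from the Photosynth/Bundler point cloud exporter.

def write_ply(path, points):
    """points: list of (x, y, z) tuples."""
    with open(path, "w") as f:
        f.write("ply\n")
        f.write("format ascii 1.0\n")
        f.write("element vertex %d\n" % len(points))
        f.write("property float x\n")
        f.write("property float y\n")
        f.write("property float z\n")
        f.write("end_header\n")
        for x, y, z in points:
            f.write("%f %f %f\n" % (x, y, z))

points = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.5)]
write_ply("cloud.ply", points)
```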

 

Here are two examples from my work in Ecuador and West Texas:

https://www.youtube.com/user/mdwillis01#p/a/u/0/2-oK5lnNA-I

and

https://www.youtube.com/user/mdwillis01#p/u/6/nJgvLll57f0 (only part of this one was done with Photosynth).

The images were captured from kite and balloon platforms but the same workflow should work for any series of photographs.

Also, here is a Google Earth file with some of the same data: http://70.114.146.89/~mwillis/Puchara_Grande.kmz (~10 MB).  My Linux box's connection is slow, so it'll take a bit to download.

-Mark

Replies to This Discussion

Nice work, thanks for sharing this.
I wouldn't say it is off-topic at all; there has been a lot of talk about it lately. The ability to produce georeferenced photo mosaics and even surface height measurements is what turns a bunch of pictures from the air into useful data. It has commercial potential and is a great application for small UAVs.
/Steve
Does the software only handle nadir-looking (i.e., looking straight down) images, or can it also handle images taken at an angle? Very cool, nevertheless. What image file types are acceptable?
From what I remember of Photosynth, images from any angle should work as long as you have plenty of them and they overlap sufficiently. It basically picks out identifiable points that are common to multiple photos, then uses the different distances, angles, and relative sizes of those points in the images to work out the 3D arrangement of the actual points on the real object, producing a point cloud. From there, it's possible to use the points to generate faces and a 3D model of some kind. I thought of using Photosynth for that purpose the moment I saw it, but it's definitely way beyond my skills, so I've more or less been waiting for someone else to do it :-P

I have to say, the results are very impressive: the model looks great, with really good resolution. How many photos did it take to get it that good?
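The mechanism described above, recovering 3D points from their positions in multiple photos, can be sketched in miniature. This toy Python example (all camera positions and ray directions are made up for illustration) finds the point closest to two viewing rays, which is the simplest form of the triangulation step tools like Photosynth or Bundler perform after matching features:

```python
# Toy triangulation: given two camera positions and the viewing rays
# toward the same matched feature, recover the 3D point as the spot
# closest to both rays (here the rays meet exactly, so it is exact).
# All positions and directions below are made up for illustration.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triangulate(p1, d1, p2, d2):
    """Closest point to the rays p1 + t*d1 and p2 + s*d2."""
    w0 = [a - b for a, b in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b          # zero only if the rays are parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = [p + t * u for p, u in zip(p1, d1)]
    q2 = [p + s * u for p, u in zip(p2, d2)]
    return [(x + y) / 2 for x, y in zip(q1, q2)]

# Camera 1 at the origin, camera 2 four units away; both see (1, 2, 5).
point = triangulate([0, 0, 0], [1, 2, 5], [4, 0, 0], [-3, 2, 5])
print(point)  # -> [1.0, 2.0, 5.0]
```

With only two rays this is exact; the real tools solve the same problem over hundreds of rays per point and thousands of points at once.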
Calls for POST OF THE MONTH AWARD...
As a prehistoric archaeologist myself, I can only say: great job with accessible means. I work in a region for which a sub-1 m lidar DEM is available for day-to-day work, so I know how useful a good DEM is. If I didn't have that, I would follow your approach, though I would prefer to take my shots from my ArduPilot airframe ;-) Keep up the good work!
@ssk320 The process works with photographs taken in any direction/rotation/skew as long as there is lots of overlap between photographs. What's also nice is that you can use images from multiple cameras, the cameras do not need to be calibrated, and changes in internal geometry don't seem to affect the results.

If you poke around on my youtube channel you can see the process used on several rock art sites (non-aerial) and get an idea of some of the other possibilities.

As for types of images that are acceptable, as far as I know, all standard image formats are supported except for RAW. My images were JPEG with minimum compression.

@Nicholas Budd The downside is that you need a large number of photographs to get a dense mesh. My example used 410 photographs. I haven't tested my UAV yet, but I'm guessing the number of photos needed for a similar area would be much lower with a UAV. The great thing about a kite is that you can fly it as long as the wind is blowing and get a lot of photos.
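For planning a UAV run, the photo spacing needed for a given overlap follows directly from the camera's ground footprint. A back-of-the-envelope Python sketch (the altitude, field of view, and overlap figures are example numbers, not from the flights described above):

```python
import math

# Rough survey-planning arithmetic: how far apart can exposures be
# while keeping a desired forward overlap? The altitude, field of
# view, and overlap below are example numbers, not real flight data.

def photo_spacing(altitude_m, fov_deg, overlap):
    """Ground distance between exposures for the given forward overlap."""
    footprint = 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)
    return footprint * (1 - overlap)

footprint = 2 * 100 * math.tan(math.radians(60) / 2)  # ~115.5 m across
spacing = photo_spacing(100, 60, 0.8)                 # 80% forward overlap
print(round(footprint, 1), round(spacing, 1))  # -> 115.5 23.1
```

So at a hypothetical 100 m altitude with a 60-degree field of view, an exposure roughly every 23 m keeps 80% overlap along the flight line.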

Thanks everyone for the kind compliments. I believe this technique, or some new variation on it, has a lot of potential.

-Mark
I regularly come home with 600 images from a flight so getting images will not be an issue ;-)

Maybe you will have to plod back and forth over the area for longer.
Now you're talking, that would be flipping handy!
@Michael, I have printed 3D hardcopies of petroglyph models using a commercial service ( 3darttopart.com ) with great success. The 3D model can be exported in a number of formats that should be readable by RepRap (which I'm not that familiar with). Making the model "watertight" might be a little complicated but I'm sure it's doable. Would be fun to see...
Hey Mark,
Great post, can't wait to grab some images and try it.

I'm at work and haven't had a chance to poke through all your links, so forgive me if these questions are answered elsewhere, but I was wondering if you could offer a few hints on acceptable overlap when grabbing images and what kind of resolution/accuracy you're getting. Also, I have the ability to log the roll and pitch attitude of the plane when the images are taken. Is that info needed to get the best results? Is there a way to feed that info to the application, or should I pre-correct the images using Hugin or some other tool?

Thanks for sharing!

Brian
Actually, there's something I have filed away in my 'Projects' bookmarks folder that might come in handy here. Take a look at this Cambridge University project (the YouTube video in particular) and see what you think. It's a similar kind of thing, and if it's at all possible to get it running on something that can be carried by a UAV, or on a video feed beamed down from one, it would be seriously awesome for realtime environment mapping. It'd also be incredibly useful for generating part models for things like the RepRap.
Brian, I haven't played around too much with the amount of overlap needed. It seems the more photographs of a location you throw at it, the higher the point cloud density. Also, it's worth noting that the detection algorithm works best on subjects with a lot of texture (like most aerial photos) but is horrible with those that don't have much (like glass or a shiny car). I used this PDF as a guide for my ground-based work.
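The point about texture can be illustrated with a crude local-contrast measure. This is not the actual detector these tools use (Bundler is built on SIFT-style features); it's just a toy Python sketch of why a varied surface yields matchable points and a flat, shiny one doesn't, using made-up pixel patches:

```python
# Toy illustration of why texture matters for feature matching:
# a patch with varied intensities has high local contrast (easy to
# match between photos); a uniform patch has almost none. This is
# NOT the real detector (Bundler uses SIFT-style features); both
# pixel patches are made up for illustration.

def contrast_score(patch):
    """Variance of pixel intensities in a flat list of values."""
    mean = sum(patch) / len(patch)
    return sum((p - mean) ** 2 for p in patch) / len(patch)

rocky_patch = [10, 200, 55, 180, 30, 220, 90, 160, 40]        # textured ground
glass_patch = [128, 128, 129, 128, 127, 128, 128, 129, 128]   # shiny/uniform

print(contrast_score(rocky_patch) > contrast_score(glass_patch))  # -> True
```

A matcher has nothing distinctive to lock onto in the low-variance patch, which is why glass and shiny cars come out as holes in the point cloud.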

No telemetry data about the location of the camera is needed. I don't preprocess the images in any way, but some people are trying it with varied results.

© 2020   Created by Chris Anderson.