Warning: lots of hi-res images!
This is my entry for T3! I intern in an NSF-funded lab which uses multicopters for ecological research. More specifically, we do photogrammetry in Agisoft Photoscan to produce LiDAR-like point clouds. We're based out of UMBC, which is the campus shown above. I'd already been thinking about doing a full campus scan, and when I saw that T3 was going to be based on 3D modelling, I knew what to do. So here's my scan, I hope you like it!
The Rig & The Mission:
The rig is a Mikrokopter Okto framed Arducopter. Parts list available here. The specs are as follows:
It can fly safely for 30 minutes (and a max linear distance of 8 kilometers at a target velocity of 7 m/s) using this setup.
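As a quick sanity check on those numbers (my own arithmetic, not anything from the lab's planning tools), an 8 km mission at 7 m/s leaves a healthy battery reserve:

```python
# Rough endurance check on the 8 km mission limit.
endurance_s = 30 * 60        # 30 minutes of safe flight time, in seconds
velocity = 7.0               # target cruise velocity, m/s
mission_distance = 8000.0    # maximum linear mission distance, m

flight_time_s = mission_distance / velocity   # time to fly the whole mission
margin_s = endurance_s - flight_time_s        # battery reserve left over

print(f"Mission time: {flight_time_s / 60:.1f} min")  # ~19 min
print(f"Reserve:      {margin_s / 60:.1f} min")       # ~11 min
```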
The camera is a Canon PowerShot ELPH 520, mounted in a waterproof case. The case is no longer waterproof because it has been lightened; its main function is to provide a stable and consistent mount for the camera. The case is mounted to the underside of the frame using M3 plastic standoffs and rubber vibration dampers. Because CHDK is not available for this model of camera, the shutter button is held down with a thin velcro strap. In sequential shooting mode, this results in a constant 2 still frames per second.
Due to the distances involved, the campus had to be divided into three missions of approximately 6 km each. The mission specs were 100 meters above ground level (well above rooftop level but still well below the 400 ft ceiling), with flight tracks spaced 39 m apart for 75% side overlap between photos.
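To see where the 39 m spacing comes from: with 75% side overlap, adjacent tracks advance only a quarter of a photo's across-track footprint. The 156 m swath width below is back-calculated from the numbers in this paragraph, not a camera spec I have on hand:

```python
# How the 39 m track spacing follows from the 75% side-overlap target.
# swath_width is an assumption inferred from the article's numbers.
altitude = 100.0          # flight altitude above ground level, m
side_overlap = 0.75       # fraction of each photo shared with the adjacent track
swath_width = 156.0       # across-track ground footprint of one photo at 100 m, m

spacing = swath_width * (1 - side_overlap)   # distance between adjacent flight lines
print(f"Track spacing: {spacing:.0f} m")     # → 39 m
```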
The flights went extremely well. Each flight was fully automatic; the only human intervention was switching to AUTO mode while on the ground and disabling the copter once it had landed. Line of sight was maintained on the copter at all times, and I was standing by to take control if necessary. Please note that the KML files do not show height properly: the height above the ground is shown as height above sea level, so the tracks are missing about 61 m of height in the Google Earth representation.
This is an example of a typical image captured by the camera. In this photoset, I noticed that my pictures were somewhat motion-blurred. This is likely due to the overcast lighting conditions triggering longer exposure times; in my experience, images are usually sharper in bright sunlight. Solutions include better vibration damping for the camera and using a higher-quality camera.
The photos were georeferenced using a custom Python script and run through Agisoft Photoscan to produce 3D models.
1. I manually discarded extraneous photos. Due to the camera setup with the shutter button held down (for maximum fps), pictures were taken for the entire duration of all three flights. I trimmed out the pictures from the takeoffs, landings, and going to and from the first and last waypoint. This leaves only the pictures taken along the vertical tracks and the short horizontal connecting tracks.
2. I batch renamed the photos to 0001 through 5443.
3. I used Python to convert my telemetry files into a text file containing only GPS coordinates, altitude, and waypoint flags. I downloaded the Ecosynth Aerial Pipeline, ran start_windows.bat, clicked on Point Cloud Pre-Processing, clicked on the telemetry conversion script, and ran my log files through it.
4. I then manually trimmed my text files down to only the start and end of the tracks. I did this using the waypoint flags and by double checking the GPS coordinates in Google Earth to make sure they were right on top of the places where my tracks start and stop. I then deleted the waypoint flags leaving only GPS coordinates and altitudes. It looked like this:
39.2553516 -76.7060103 100
39.2553536 -76.7060059 99.97
39.2553558 -76.706002 99.96
39.2553587 -76.7059985 99.97
39.2553619 -76.7059957 99.99
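The manual trimming in step 4 could also be scripted. Here's a sketch under the assumption that the converted telemetry is whitespace-separated `lat lon alt flag` rows, with a nonzero flag marking waypoints; the real Ecosynth output format may differ:

```python
def trim_to_tracks(lines):
    """Keep only rows from the first waypoint flag through the last,
    then drop the flag column, leaving 'lat lon alt' rows."""
    flagged = [i for i, ln in enumerate(lines) if ln.split()[3] != "0"]
    start, end = flagged[0], flagged[-1]
    return [" ".join(ln.split()[:3]) for ln in lines[start:end + 1]]

# Toy telemetry: flag column is hypothetical, values borrowed from the sample above.
raw = [
    "39.2553516 -76.7060103 100 1",    # waypoint flag: track start
    "39.2553536 -76.7060059 99.97 0",
    "39.2553558 -76.706002 99.96 0",
    "39.2553587 -76.7059985 99.97 1",  # waypoint flag: track end
    "39.2553619 -76.7059957 99.99 0",  # past the last waypoint -> dropped
]
print("\n".join(trim_to_tracks(raw)))
```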
5. Next I ran another Python script to assign GPS coordinates to each picture. I made a folder for each flight, put each flight's pictures into its folder along with the appropriate text file, and ran the script. Please note that I believe this script only works for our 2 FPS Canon ELPH 520 setup. The output looked like this:
# <label> <x> <y> <z>
IMG_0001.JPG 39.2553536 -76.7060059 99.97
IMG_0002.JPG 39.2553587 -76.7059985 99.97
IMG_0003.JPG 39.2553653 -76.7059932 99.99
IMG_0004.JPG 39.2553733 -76.7059892 100.01
IMG_0005.JPG 39.2553829 -76.7059861 100.02
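Here's a minimal sketch of what that georeferencing step does. The one-row offset and two-rows-per-photo step are assumptions that happen to reproduce the sample above (the telemetry appears to log at twice the camera's 2 FPS rate); the actual script is part of the Ecosynth pipeline and I'm not reproducing it here:

```python
def georeference(photos, telemetry, offset=1, step=2):
    """Pair each photo with a telemetry row and return
    '<label> <x> <y> <z>' lines for Photoscan ground control."""
    lines = []
    for i, name in enumerate(photos):
        lat, lon, alt = telemetry[offset + i * step].split()
        lines.append(f"{name} {lat} {lon} {alt}")
    return lines

# Telemetry rows borrowed from the sample in step 4.
telemetry = [
    "39.2553516 -76.7060103 100",
    "39.2553536 -76.7060059 99.97",
    "39.2553558 -76.706002 99.96",
    "39.2553587 -76.7059985 99.97",
    "39.2553619 -76.7059957 99.99",
]
for line in georeference(["IMG_0001.JPG", "IMG_0002.JPG"], telemetry):
    print(line)
# IMG_0001.JPG 39.2553536 -76.7060059 99.97
# IMG_0002.JPG 39.2553587 -76.7059985 99.97
```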
6. I then merged the resulting three text files into one large file for Photoscan ground control, and moved all the photos back into one large folder together.
7. I added my photos to Photoscan and used the ground control screen to import my GPS coordinates and heights. I left the accuracy at 10 m.
8. I ran Photoscan! Everything after this is just simple use of Photoscan according to the manual.
9. After the point cloud was processed, I built both a height-map model and an arbitrary-geometry (true 3D) mesh model. Both models were very large, about 16 GB each. I made several decimated and textured models for export.
10. I exported directly from Photoscan to Sketchfab. I also made some .ply files, as well as orthophotos.
Here's a small version of the orthophoto. The full-resolution version has a 0.03 m resolution, meaning each pixel represents 3 centimeters. I've never been super excited about orthophotos since I work mainly in 3D, but this was easy to make, so I figured why not include it.
Once you zoom in, you can see why it's called a sparse point cloud. This is the same cloud as the previous image, but cropped down to just the library and zoomed in. Obviously, roofs and lawns get a lot more points than the sides of buildings.
Here's that same view of the library, but with the dense point cloud. A lot nicer! I processed only the library at dense quality, because dense-cloud processing time is prohibitively long. It's worth mentioning, though, that the entire campus could be processed to this level of detail given enough patience and a supercomputer. Notice how the points on the tan roofs and the grass are so dense as to look solid, while the white roofs and the sides of the buildings are not as dense.
Now it's time for some 3D meshes! Obviously, the raw mesh product is a prohibitively large file. So I performed decimations and cropped to smaller areas. I have Sketchfab pages and .ply files!
If you zoom in on the parking garage, you will see that a couple of cars look transparent and ghostly. This is because the car either left or pulled in between the copter's multiple passes.
The following series of images shows each progressively more complex representation of the 3D model available in Photoscan. They're from my most complex model, screenshotted straight out of Photoscan. This model is so big that I cannot open it with any external program without decimating it first.
This is just the sparse point cloud, the most basic representation. Notice some surfaces have no or few points, like the sidewalks and some roofs. This is because plain white objects have few identifiable features.
This is the wire mesh representation. I had to zoom in to make the individual polygons visible.
Now we have the solid mesh. It is like the wire mesh, but with each polygon filled in. This representation is good for examining the shape of your model without visual clues in the texture changing how the shapes appear.
Next is the shaded solid! Photoscan assigns each polygon a diffuse color. Since the polygons in this model are so small, this gives a decent representation.
The final textured model. This is as realistic looking as it gets, for this scan.
And finally, I'd like to show some screenshots from the high quality model:
All in all, this project was a cool experience and I'm glad the T3 contest prompted me to do it. I definitely learned a few things:
Credit goes to the Ecosynth lab at UMBC (of which I am an intern) for use of their equipment to do this scan. Check us out at Ecosynth.org. Credit also goes to Jonathan Dandois (also of Ecosynth) for helping me with georeferencing the photos.