Octocopter Scan of UMBC (T3 Entry)

[Image: the completed 3D scan of the UMBC campus]

Warning: lots of hi-res images!

This is my entry for T3!  I intern in an NSF-funded lab that uses multicopters for ecological research.  More specifically, we do photogrammetry in Agisoft Photoscan to produce LiDAR-like point clouds.  We're based out of UMBC, which is the campus shown above.  I'd already been thinking about doing a full campus scan, and when I saw that T3 was going to be based on 3D modelling, I knew what to do.  So here's my scan, I hope you like it!

 

The Rig & The Mission:

[Image: the octocopter]

The rig is an ArduCopter built on a Mikrokopter Okto frame.  Parts list available here.  The specs are as follows:

  • 12" APC Props
  • MK3638 Motors
  • jDrones 30A ESCs
  • jDrones Power Distro Ring
  • Mikrokopter Okto XL Frame
  • Mikrokopter Hilander Landing Gear
  • APM 2.5 running 2.9.1b
  • 3DR Telem Radio
  • Spektrum AR7000 + DX7S
  • Garmin Astro GPS Dog Tracker
  • Ziploc Tupperware Dome
  • Four Parallel 5000mAh 4S Lipos

It can fly safely for 30 minutes (and a maximum linear distance of 8 kilometers at a target velocity of 7 m/s) using this setup.
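As a quick sanity check on mission planning, here's a minimal Python sketch comparing planned track length against endurance.  The 30 minutes and 7 m/s are from this post; the 25% battery reserve is my own assumption:

# Rough check: how much track can the copter fly while keeping a reserve?
# Endurance and cruise speed are from this post; the 25% reserve margin
# is an assumption, not our actual operating rule.

def max_track_length_m(endurance_min, speed_ms, reserve=0.25):
    """Longest mission track flyable while keeping a battery reserve."""
    return endurance_min * 60 * speed_ms * (1 - reserve)

print(max_track_length_m(30, 7) / 1000)  # ~9.4 km, so an 8 km mission fits
# The ~6 km missions flown here leave an even bigger margin.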

The camera is a Canon Powershot ELPH 520, mounted in a waterproof case.  The case is no longer waterproof because it has been lightened; its main function is to provide a stable and consistent mount for the camera.  The case is mounted to the underside of the frame using M3 plastic standoffs and rubber vibration dampers.  Because CHDK is not available for this model of camera, the shutter button is held down with a thin velcro strap.  In sequential shooting mode, this results in a constant 2 still frames/second.

Due to the distances involved, the campus had to be divided into three missions of approximately 6 km each.  The missions were flown at 100 meters above ground level (well above rooftop level but also well below 400 ft), with tracks 39 m apart for 75% side overlap between photos.

Download Mission Planner Files.
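If you want to derive the track spacing for a different camera or altitude, the geometry is straightforward.  Here's a sketch in Python; the 76° across-track field of view is back-calculated from my own numbers (39 m spacing at 75% sidelap implies a ~156 m footprint), so substitute your camera's real value:

import math

# Track spacing needed for a target side overlap.  agl_m and sidelap
# are from the mission specs above; fov_deg is an assumed across-track
# field of view, so use your camera's actual value.

def track_spacing_m(agl_m, fov_deg, sidelap):
    # Across-track ground footprint of one image at the given altitude.
    footprint = 2 * agl_m * math.tan(math.radians(fov_deg) / 2)
    return footprint * (1 - sidelap)

print(track_spacing_m(agl_m=100, fov_deg=76, sidelap=0.75))  # ~39 m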

[Image: the planned flight tracks shown in Google Earth]

The flights went extremely well.  Each flight was fully automatic; the only human intervention was switching to AUTO mode while on the ground and disarming the copter once it had landed.  Line of sight was maintained on the copter at all times, and I was standing by to take control if necessary.  Please note that the KML files do not show height properly: the height above the ground is shown as height above sea level, so the tracks are missing about 61 m of height in the Google Earth representation.

Download the Google Earth File Shown Above.

Download the Raw Dataflash and Tlogs.
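If the KML height offset bothers you, one rough fix is to shift every altitude up by the local ground elevation.  A sketch using only the Python standard library; the 61 m constant is from above, and the file names are placeholders:

import xml.etree.ElementTree as ET

# Shift every altitude in a KML file upward by the local ground
# elevation, so AGL heights display correctly in Google Earth.
KML_NS = "{http://www.opengis.net/kml/2.2}"
GROUND_ELEVATION_M = 61  # approximate ground elevation at UMBC

tree = ET.parse("tracks.kml")  # placeholder file name
for coords in tree.iter(KML_NS + "coordinates"):
    if not coords.text:
        continue
    fixed = []
    for triple in coords.text.split():  # each entry is "lon,lat,alt"
        lon, lat, alt = triple.split(",")
        fixed.append("%s,%s,%s" % (lon, lat, float(alt) + GROUND_ELEVATION_M))
    coords.text = " ".join(fixed)
tree.write("tracks_fixed.kml")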

This is an example of a typical image captured by the camera.  In this photoset, I noticed that my pictures were somewhat motion blurred.  This is likely due to the overcast lighting triggering longer exposure times; in my experience, images are usually sharper in bright sunlight.  Solutions include better vibration damping for the camera and using a higher-quality camera.
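A back-of-the-envelope estimate shows why the blur appears.  The speed and ground sample distance are from this post; the 1/100 s shutter speed is an assumed overcast-day value, not a logged one:

# Estimate motion blur in pixels: distance traveled during the
# exposure, divided by the ground size of one pixel.
speed_ms = 7.0        # cruise speed from the mission specs
exposure_s = 1 / 100  # assumed overcast-day shutter speed
gsd_m = 0.03          # ~3 cm ground sample distance at 100 m AGL

print(speed_ms * exposure_s / gsd_m)  # ~2.3 px of smear, visibly soft
# In bright sun at 1/800 s, the same math gives ~0.3 px.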

In total, 5443 pictures went into the scan.  Additional pictures from the ascent, the landing, and the transits to and from home were discarded.

 

The Workflow:

The photos were georeferenced using a custom Python script and run through Agisoft Photoscan to produce 3D models.

1.  I manually discarded extraneous photos.  Because the shutter button was held down (for maximum fps), pictures were taken for the entire duration of all three flights.  I trimmed out the pictures from the takeoffs, the landings, and the transits to the first and from the last waypoint.  This left only the pictures taken along the vertical tracks and the short horizontal connecting tracks.

2.  I batch renamed the photos to 0001 through 5443.
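Any batch-rename tool works for this; a minimal Python version (the folder name is a placeholder) looks like this:

import os

# Rename all JPGs in a folder to 0001.jpg, 0002.jpg, ... in sorted order.
folder = "photos"  # placeholder path
for i, name in enumerate(sorted(f for f in os.listdir(folder)
                                if f.lower().endswith(".jpg")), start=1):
    os.rename(os.path.join(folder, name), os.path.join(folder, "%04d.jpg" % i))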

3.  I used Python to convert my telemetry files into a text file containing only GPS coordinates, altitude, and waypoint flags.  I downloaded the Ecosynth Aerial Pipeline, ran start_windows.bat, clicked on Point Cloud Pre-Processing, clicked on the telemetry conversion script, and ran my log files through it.
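For anyone not using the Ecosynth pipeline, a minimal stand-in for this step looks something like the sketch below.  The column indices are assumptions based on the 2.9.x dataflash format, so check them against your own log header:

# Pull lat, lon, relative altitude, and waypoint flags out of an APM
# dataflash text log.  Column positions vary between firmware versions,
# so verify the indices against your own log before trusting them.
def extract_gps(log_path, out_path):
    with open(log_path) as log, open(out_path, "w") as out:
        for line in log:
            fields = [f.strip() for f in line.split(",")]
            if fields[0] == "GPS" and len(fields) > 7:
                lat, lon, rel_alt = fields[5], fields[6], fields[7]
                out.write("%s %s %s\n" % (lat, lon, rel_alt))
            elif fields[0] == "CMD":  # waypoint flag, kept for manual trimming
                out.write("WP %s\n" % ",".join(fields[1:]))

extract_gps("flight1.log", "flight1_gps.txt")  # placeholder file names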

4.  I then manually trimmed my text files down to just the start and end of the tracks.  I did this using the waypoint flags, and by double-checking the GPS coordinates in Google Earth to make sure they were right on top of the places where my tracks start and stop.  I then deleted the waypoint flags, leaving only GPS coordinates and altitudes.  It looked like this:

39.2553516 -76.7060103 100
39.2553536 -76.7060059 99.97
39.2553558 -76.706002 99.96
39.2553587 -76.7059985 99.97
39.2553619 -76.7059957 99.99

5.  Next I ran another Python script to assign GPS coordinates to each picture.  Basically, I made a folder for each flight, put the pictures in their corresponding folders, put the appropriate text file for each flight in its folder, and ran the script (a sketch of which follows the sample below).  Please note that I believe this script only works for our 2 fps Canon ELPH 520 setup.  The output looked like this:

# <label> <x> <y> <z>
IMG_0001.JPG 39.2553536 -76.7060059 99.97
IMG_0002.JPG 39.2553587 -76.7059985 99.97
IMG_0003.JPG 39.2553653 -76.7059932 99.99
IMG_0004.JPG 39.2553733 -76.7059892 100.01
IMG_0005.JPG 39.2553829 -76.7059861 100.02
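Conceptually, the script steps through the trimmed GPS fixes in 0.5-second chunks and pairs each chunk with the next photo.  Here's a sketch of the idea; the 10 Hz log rate and file names are assumptions, not our exact script:

# Pair photos shot at 2 fps with GPS fixes from the trimmed log.
LOG_RATE_HZ = 10        # assumed GPS logging rate; check your logs
PHOTO_INTERVAL_S = 0.5  # 2 still frames/second
step = int(LOG_RATE_HZ * PHOTO_INTERVAL_S)

with open("flight1_gps.txt") as f:  # output of the previous step
    fixes = [line.split() for line in f if line.strip()]

with open("flight1_georef.txt", "w") as out:
    out.write("# <label> <x> <y> <z>\n")
    for i, (lat, lon, alt) in enumerate(fixes[::step], start=1):
        out.write("IMG_%04d.JPG %s %s %s\n" % (i, lat, lon, alt))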

6.  I then merged the resulting three text files into one large file for Photoscan ground control, and moved all the photos back into one large folder together.

7.  I added my photos to Photoscan and used the ground control screen to import my GPS coordinates and heights.  I left the accuracy at 10 m.

8.  I ran Photoscan!  Everything after this is just simple use of Photoscan according to the manual.  

9.  After the point cloud was processed, I built both a height-field model and an arbitrary-geometry (true 3D) mesh model.  Both models were very large, about 16 GB each.  I made several decimated and textured models for export.

10.  I exported directly from Photoscan to Sketchfab.  I also made some .ply files, as well as orthophotos.

 

The Goods:

Here's a small version of the orthophoto.  The full-resolution version is 0.03 m resolution, meaning each pixel represents 3 centimeters.  I've never been super excited about orthophotos since I work mainly in 3D, but this was easy to make, so I figured why not include it.

Download the Full Resolution Orthophoto as Shown Above.
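For reference, the ground sample distance follows directly from flying height, focal length, and pixel pitch.  The sensor numbers below are typical small-sensor compact values, not measured from this ELPH 520:

# GSD = altitude * pixel_pitch / focal_length
altitude_m = 100.0       # flying height from the mission specs
focal_length_m = 4.3e-3  # assumed wide-angle focal length
pixel_pitch_m = 1.5e-6   # assumed sensor pixel pitch

print(altitude_m * pixel_pitch_m / focal_length_m * 100)  # ~3.5 cm/px, close to the ~3 cm achieved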

This is the sparse point cloud from Photoscan.  Point clouds are our lab's bread and butter.  I think this one turned out rather nicely.

Download the Sparse Point Cloud (.ply) as Shown Above.

Download the Sparse Point Cloud (.las) as Shown Above.

Once you zoom in, you can see why it's called a sparse point cloud.  This is the same cloud as in the previous image, but cropped down to just the library and zoomed in.  Obviously, roofs and lawns get a lot more points than the sides of buildings.

Here's that same view of the library, but with the dense point cloud.  A lot nicer!  I processed only the library at dense quality, because dense-cloud processing time is prohibitively long.  But it's worth mentioning that the entire campus could be processed to this level of detail, given enough patience and a supercomputer.  Notice how the points on the tan roofs and the grass are so dense as to look solid, but the white roofs and the sides of the buildings are not as dense.

Download the Dense Point Cloud (.ply) as Shown Above.

Download the Dense Point Cloud (.las) as Shown Above.

Now it's time for some 3D meshes!  The raw mesh product is a prohibitively large file, so I performed decimations and cropped to smaller areas.  I have Sketchfab pages and .ply files!

This is the full campus!  It had to be decimated pretty heavily to fit onto Sketchfab.

Click Through to Sketchfab.

Download the .ply file.

For whatever reason, Sketchfab renders some of these models very wiggly.  I recommend downloading the .ply to see the best texture.

Click Through to Sketchfab.

Download the .ply file.

The Sketchfab for this one is very wiggly; I recommend the .ply.

Click Through to Sketchfab.

Download the .ply file.

Another nice one.  These apartments were easy work for Photoscan.

Click Through to Sketchfab.

Download the .ply file.

Sorry the colors are so dim in these MeshLab screenshots; I am very new to MeshLab.

Click Through to Sketchfab.

Download the .ply file.

[Image: the parking garage model]

If you zoom in on the parking garage, you will see that a couple of cars look transparent and ghostly.  This is because the car either left or pulled in between the copter's multiple passes.

Click Through to Sketchfab.

Download the .ply file.

 

The following series of images shows progressively more complex representations of the 3D model available from Photoscan.  They're from my most complex model, screenshotted straight out of Photoscan.  This model is so big that I cannot open it with any external program without decimating it first.

This is just the sparse point cloud, the most basic representation.  Notice that some surfaces, like the sidewalks and some roofs, have few or no points.  This is because plain white objects have few identifiable features.

This is the wire mesh representation.  I had to zoom in to make the individual polygons visible.

Now we have the solid mesh.  It is like the wire mesh, but with each polygon filled in.  This representation is good for examining the shape of your model without visual cues in the texture changing how the shapes appear.

Next is the shaded solid!  Photoscan assigns each polygon a diffuse color.  Since the polygons in this model are so small, this gives a decent representation.

The final textured model.  This is as realistic-looking as it gets for this scan.

 

And finally, I'd like to show some screenshots from the high quality model:

[Screenshots from the high-quality model]

 

All in all, this project was a cool experience and I'm glad the T3 contest prompted me to do it.  I definitely learned a few things:

  • Plain white roofs make for poor reconstructions, because they have very few identifiable features.  If you are trying to capture a bright white roof, try dialing down the camera's exposure to make the roof less washed out.
  • If you are trying to accurately capture the texture on the sides of buildings, top-down photos won't cut it.  Even though the camera has a wide field of view, all the sides of buildings are photographed from a high angle.  Additional pictures taken from the sides of buildings will give you better side textures.
  • If you have enough pictures, moving objects simply disappear.  There was light foot traffic on campus during this scan, but in the 3D models the campus looks like a ghost town.
  • Tall thin objects like lamp posts cannot be captured from 100m up using this camera.
  • It would be a lot easier to tag these photos automatically using APM.  Unfortunately, this camera has no CHDK support, so I would need a servo to press the shutter, which is complicated.
  • And a bunch more I'll add as I think of them!

Credit goes to the Ecosynth lab at UMBC (where I am an intern) for the use of their equipment for this scan.  Check us out at Ecosynth.org.  Credit also goes to Jonathan Dandois (also of Ecosynth) for helping me with georeferencing the photos.


Comments

  • T3

    @Brit - Thank you!  As the creator of this, I'm probably my own biggest critic.  I'm often wishing certain walls were less lumpy or certain textures were less distorted.  But I have to remember that the scale of this scan is huge, and when zoomed out it looks pretty nice!

  • The quality and scope of this entry are mind-blowing!

  • T3

    Dear Hugues,

    The script works on the assumption that the pictures are taken 0.5 seconds apart (we have measured it; this is a safe assumption).  Once we manually trim the log file to the section that corresponds to when the pictures were taken, the script automatically grabs the GPS coordinates for every 0.5 seconds of log and assigns them to pictures.  It isn't exact, but we enter an error of 10 m into Photoscan, and it automatically corrects the picture coordinates when it processes them.

    I neglected to enable attitude logging, although since the camera is hard-mounted it would be fairly easy to add attitude to the georeferencing script.  We didn't enter any assumptions for attitude.  Attitude would probably help reduce processing time.

    Check out this KMZ orthophoto I just made; you can compare my scan to Google Earth's imagery.  The accuracy seems to be within approximately 3 m.

    I would much prefer to use APM's integrated photo georeferencing, but the camera I'm using doesn't support CHDK and I don't have the equipment for a servo trigger.

  • 100KM
    Fantastic work, and thanks for the tips.  I love creating 3D maps and used to use Hypr3D, but since they changed it, it doesn't seem to work anymore.  I would've used a plane; very impressive coverage for a multi.  Again, excellent work!
  • MR60

    Excellent.

    Can you explain a bit more how you linked each picture with its GPS coordinates? 

    Did you use copter attitude data in the processing flow, or did you assume pictures were all taken horizontally?

  • T3

    Thank you for the great feedback, guys!  Eko, I'd be glad to share my parameter file, but it'll have to wait until Monday when I get back to the lab.

  • Very interesting.  Would you share PIDs?  Thanks.

  • Moderator

    You are living in the future

  • Developer

    That is fantastic!

  • UN-Believable!

    This is some really amazing stuff.

    Yeah, you definitely want to go 3.1.1.
