3D reconstruction of largest Hallstatt tumulus grave in central Europe (T3 entry)

Hi all!

This is my entry for the T3 Season 2 - The Model challenge: 

A survey of the earliest Celtic lunar calendar based on free open source tools

 http://de.wikipedia.org/

The "building":

The royal tomb at Magdalenenberg, near Villingen-Schwenningen, in Germany’s Black Forest, is the largest Hallstatt tumulus grave in central Europe, measuring over 320ft (100m) across and (originally) 26ft (8m) high. The royal tomb functioned as a lunar calendar and preserves a map of the sky at Midsummer 618 BC, the presumed date of the burial. The order of the secondary burials around the central royal tomb fits exactly the pattern of the constellations visible in the northern hemisphere at Midsummer in 618 BC, while timber alignments mark the position not of the sunrise and sunset but of the moon, and notably the Lunar Standstill. Lunar Standstills are marked in several ancient cultures (including sites in Colorado and Ohio), usually by standing stones that indicate the point where the moon seems to rise and set in the same place, instead of rising in one place and appearing to move across the sky to set in another.

As such the royal tomb at Magdalenenberg is the earliest and most complete example of a Celtic calendar focused on the moon. Following Caesar’s conquest of Gaul, the Gallic culture was destroyed and these types of calendar were completely forgotten in Europe.

http://www.world-archaeology.com/

See also: http://en.wikipedia.org/

The copter:

  • A Hexacopter (Mikrokopter frame, APM 2.6) based on a design concept by Stephen Gienow.

The cameras: 

  • Canon Elph 110 HS, one converted to IR

The free open source image and 3D processing chain:

The open source Ground Control Station:

The flight:

  • Flight altitude: 50m AGL 
  • Ground image resolution: 0.7cm
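
As a quick sanity check on those two numbers: ground image resolution follows directly from altitude, focal length and pixel pitch. The sensor figures below are the nominal ELPH 110 HS specs, my assumption rather than anything from the flight logs:

```python
def gsd_cm(altitude_m, focal_mm, sensor_width_mm, image_width_px):
    """Ground sample distance in cm/px for a nadir photo."""
    pixel_pitch_mm = sensor_width_mm / image_width_px
    return altitude_m * 100.0 * pixel_pitch_mm / focal_mm

# Nominal Canon ELPH 110 HS specs (assumed): 1/2.3" sensor, ~6.17 mm wide,
# 4608 px across, 4.3-21.5 mm zoom lens.
print(f"{gsd_cm(50, 4.3, 6.17, 4608):.2f} cm/px")  # ~1.56 cm/px at the widest setting
```

With the lens at its widest (4.3 mm) this gives about 1.6 cm/px at 50 m; reaching 0.7 cm with this sensor corresponds to a focal length of roughly 9.6 mm (solving the same formula for focal_mm), i.e. flying slightly zoomed in.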

The Results:

  • 3D point cloud before dense matching:

  • Clipped 3D point cloud after dense matching (however, not the highest resolution possible):

You can download this file here (22MB *.zip).

  • RGB orthophoto:

  • IR orthophoto:

  • Soil Adjusted Vegetation Index map:

  • Digital surface model draped over a hill shade model and the IR orthophoto:
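
For anyone wondering how the Soil Adjusted Vegetation Index map is computed from the RGB and IR orthophotos: it is the standard Huete (1988) formula with a soil brightness correction factor L (typically 0.5). A minimal numpy sketch, assuming the NIR and red bands have already been extracted as reflectance-like arrays:

```python
import numpy as np

def savi(nir, red, L=0.5):
    """Soil Adjusted Vegetation Index (Huete 1988)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (1.0 + L) * (nir - red) / (nir + red + L)

# Toy reflectance values: dense vegetation vs. sparse cover.
print(savi([0.6, 0.5], [0.1, 0.2]))  # [0.625 0.375]
```

With L = 0 this reduces to plain NDVI; the correction term mainly matters over the kind of partly bare ground seen on the tumulus.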

The georeferenced model

  • Google Earth: T3_Magdalenenberg.kmz (with subfolders)
    • including:
      • Flight path
      • Camera trigger locations
      • RGB orthophoto
      • IR orthophoto
      • DSM model
    • Size: 11MB

The 3D model to play with

  •  WebGL 3D model 
    • it might take some time to load...
    • tested with Firefox, Chrome, IE and Safari (in Safari, WebGL needs to be enabled first)
    • this is how it looks (in case it doesn't load)

A 3D visualization for some more serious analysis

  • Orthophoto 

  • Hillshade

  • Enhanced Vegetation Index

The APM Log file:

Some random remarks:

  • All data is georeferenced
  • MicMac (as well as the whole processing chain) is quite complex
  • For safety reasons I always take off in Stabilize mode
  • Landing should have been automatic but had to be interrupted because of some curious onlookers 
  • APM+Droidplanner is a great professional product
  • It would be great to have an image dataset (or several) available on DIYDrones for comparing image processing software
  • Color calibration is a big issue

Thanks to Gary, DIYdrones, 3DRobotics, and the developers of the free open source tools!

Thorsten

Comment by Thorsten on February 2, 2014 at 9:41am

Hi Ned! Thanks to your advice from a discussion over at publiclab.org I am using two cameras. I also started with your great PhotoMonitoringPlugin to overlay the RGB and the IR images. However, I needed more fine-tuning and a different workflow, so I am using the underlying ImageJ plugins instead. This is part of the problem with setting up a wiki/workflow: you need manual fine-tuning and different paths at several places, for different setups, cameras, images ...
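
The core of overlaying the two bands is estimating the offset between the RGB and IR frames. The ImageJ plugins do the real work, but the basic idea can be sketched with FFT phase correlation (translation only, synthetic data; the real workflow also needs rotation/scale handling):

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Integer (dy, dx) to apply to `mov` (via np.roll) so it lines up with `ref`."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2:                         # unwrap circular shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

# Synthetic check: shift an image and recover the offset.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
mov = np.roll(ref, (3, -5), axis=(0, 1))
print(phase_correlation_shift(ref, mov))    # (-3, 5)
```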

I am working a lot with information extraction, especially supervised spatial data mining. But I agree that often very specific algorithms need to be developed. What do you have in mind?

Comment by Ned Horning on February 2, 2014 at 10:26am

Hi Thorsten - I agree each application is going to be somewhat different but for me it would be great if there were a place where people would contribute their experience with the different hardware and software components and protocols for different applications. A lot of the tools you used for this post have user guides and some have tutorials but very little is available in the context of low altitude small format photography. I wouldn't want to focus on a single global work flow but instead focus on the components using tutorials and supplement that with case studies / examples to show work flows for specific applications. I've considered setting something like that up myself but I'm a bit clueless about where to start. Maybe I should just start and figure it out...

For information extraction I'm loosely following three paths: automated classification, hybrid visual/automated methods, and feature recognition. I'm not doing any algorithm development but trying to figure out which existing algorithms can be adapted for our work. My current work is experimenting with image segmentation as a preprocessing step and random forests for a classification algorithm. I think the greatest gains for now are to be had by developing image acquisition protocols to reduce image-to-image variation and enhance detection of the features of interest so I've been putting thought into that as well.
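
To make the segmentation-as-preprocessing step concrete, here is a minimal sketch (array names and values are made up): each segment is reduced to a feature vector of mean band values, which can then be handed to a random forest classifier:

```python
import numpy as np

def segment_features(image, segments):
    """Mean band value per segment.
    image: (H, W, bands) array; segments: (H, W) integer label map.
    Returns (segment_ids, features) with one feature row per segment."""
    ids = np.unique(segments)
    feats = np.empty((ids.size, image.shape[2]))
    for i, s in enumerate(ids):
        feats[i] = image[segments == s].mean(axis=0)
    return ids, feats

# Tiny example: 2x2 image, 2 bands, two segments (top row / bottom row).
image = np.array([[[1., 2.], [3., 4.]],
                  [[5., 6.], [7., 8.]]])
segments = np.array([[0, 0], [1, 1]])
ids, feats = segment_features(image, segments)
print(feats)  # one row per segment: means (2, 3) and (6, 7)
# feats (plus training labels) could then go into e.g.
# sklearn.ensemble.RandomForestClassifier.
```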

Comment by Andrew Rabbitt on February 2, 2014 at 6:11pm

Seriously impressive work Thorsten.  Congratulations!


Comment by Thorsten on February 3, 2014 at 1:16am

Andrew, thanks! 


Comment by Thorsten on February 3, 2014 at 1:31am

Ned, I'll try to put something together over the next weeks.

I also use Random Forests quite often. There are millions of methods but RF is very robust and easy to handle. What kind of information are you aiming to extract? Single tree crowns, regions of poor plant growth, ...? Some repository for such algorithms would also be very important. The key to successful extraction is, IMHO, data preprocessing; the data mining algorithm is secondary.

You are right: image to image variation as well as color calibration is a big problem. I am currently experimenting with standardized color charts. 

Comment by Ned Horning on February 3, 2014 at 6:41am

Thorsten, Most of my random forests work is for satellite image land cover classification and percent forest / shrub cover mapping but I'm also playing around with classification of some aerial imagery. I have some of my scripts and guides on a Bitbucket site: https://bitbucket.org/rsbiodiv/.

For feature recognition I'm looking for actual applications and in the meantime I'm looking into some open source libraries like OpenCV.


Comment by Thorsten on February 3, 2014 at 7:05am

Ned, great resources! Have you played around with caret in R? It gives you access to a myriad of models and tunes them for you.

I am working on a workflow where I use OpenCV for detecting the color reference chart and to automatically adjust the colors of the orthophoto. I hope to start first field tests soon...
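
Leaving the OpenCV chart detection aside, the adjustment step itself can be as simple as a least-squares affine color transform fitted between measured and reference patch values. A rough numpy sketch, assuming the patch RGB values have already been sampled:

```python
import numpy as np

def fit_color_matrix(measured, reference):
    """Least-squares 4x3 affine color correction.
    measured, reference: (n_patches, 3) RGB values sampled from the chart."""
    A = np.hstack([measured, np.ones((len(measured), 1))])  # add constant term
    M, *_ = np.linalg.lstsq(A, reference, rcond=None)
    return M  # shape (4, 3)

def apply_color_matrix(rgb, M):
    """Apply the fitted correction to an (n, 3) array of pixels."""
    return np.hstack([rgb, np.ones((len(rgb), 1))]) @ M

# Synthetic demo: fitting should recover a known "true" transform.
rng = np.random.default_rng(0)
measured = rng.random((24, 3))              # chart patches as seen by the camera
M_true = rng.random((4, 3))
reference = apply_color_matrix(measured, M_true)
M = fit_color_matrix(measured, reference)
print(np.allclose(apply_color_matrix(measured, M), reference))  # True
```

With 24 chart patches the 12 unknowns are heavily overdetermined, which keeps the fit stable against noise in the sampled patch values.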

Comment by Ned Horning on February 3, 2014 at 11:11am

Thorsten, I haven't used caret but it's on my to-learn list.

Comment by AndyC on September 24, 2014 at 11:59am

Fantastic demonstration Thorsten; could you let us know how many images you captured for this reconstruction?


Comment by Thorsten on September 24, 2014 at 12:53pm

Andy, thanks!

I just had a look and was surprised: it was only 188 images for the final model.
