T3

Hi all!

This is my entry for the T3 Season 2 - The Model challenge: 

A survey of the earliest Celtic lunar calendar based on free open source tools

[Image: Magdalenenberg bei Villingen]

Image source: http://de.wikipedia.org/

The "building":

The royal tomb at Magdalenenberg, near Villingen-Schwenningen in Germany’s Black Forest, is the largest Hallstatt tumulus grave in central Europe, measuring over 320 ft (100 m) across and (originally) 26 ft (8 m) high. The tomb functioned as a lunar calendar and preserves a map of the sky at Midsummer 618 BC, the presumed date of the burial. The arrangement of the secondary burials around the central royal tomb exactly matches the pattern of the constellations visible in the northern hemisphere at Midsummer in 618 BC, while timber alignments mark the positions not of sunrise and sunset but of the moon, notably the Lunar Standstill. Lunar Standstills are marked in several ancient cultures (including sites in Colorado and Ohio), usually by standing stones indicating the point where the moon appears to rise and set in the same place, instead of rising in one place and moving across the sky to set in another.

As such, the royal tomb at Magdalenenberg is the earliest and most complete example of a Celtic calendar focused on the moon. Following Caesar’s conquest of Gaul, Gallic culture was destroyed and this type of calendar was completely forgotten in Europe.

http://www.world-archaeology.com/

See also: http://en.wikipedia.org/

The copter:

  • A Hexacopter (Mikrokopter frame, APM 2.6) based on a design concept by Stehen Gienow.


The cameras: 

  • Two Canon Elph 110 HS cameras, one of them converted to IR

The free open source image and 3D processing chain:

The open source Ground Control Station:

The flight:

  • Flight altitude: 50m AGL 
  • Ground image resolution: 0.7cm
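For context, the ground image resolution (ground sampling distance, GSD) follows directly from altitude, focal length, sensor width, and image width. A quick sketch using typical Canon Elph 110 HS parameters, which are assumptions here rather than values taken from the flight logs:

```python
# GSD = altitude * sensor_width / (focal_length * image_width)
# All camera parameters below are typical Canon Elph 110 HS values
# (assumptions, not taken from the actual flight configuration).
altitude_m = 50.0         # flight altitude AGL, from the post
sensor_width_mm = 6.17    # 1/2.3" sensor (assumption)
image_width_px = 4608     # 16 MP image width (assumption)
focal_mm = 4.3            # widest focal length (assumption)

gsd_cm = altitude_m * 100 * sensor_width_mm / (focal_mm * image_width_px)
print(f"GSD at widest zoom: {gsd_cm:.2f} cm/px")

# Focal length implied by the reported 0.7 cm GSD at 50 m,
# i.e. the camera was presumably zoomed in somewhat:
implied_focal_mm = altitude_m * 100 * sensor_width_mm / (0.7 * image_width_px)
print(f"Implied focal length for 0.7 cm/px: {implied_focal_mm:.1f} mm")
```

At the widest zoom this works out to roughly 1.6 cm/px, so the reported 0.7 cm suggests a longer focal length setting was used.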

The Results:

  • 3D point cloud before dense matching:


  • Clipped 3D point cloud after dense matching (however, not the highest resolution possible):


You can download this file here (22MB *.zip).

  • RGB orthophoto:


  • IR orthophoto:


  • Soil Adjusted Vegetation Index map:

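The index itself is straightforward to reproduce. A minimal NumPy sketch, assuming the NIR band comes from the converted IR camera, the red band from the RGB orthophoto, and that both are co-registered and scaled to reflectance-like values (those assumptions are mine, not a description of the actual processing):

```python
import numpy as np

def savi(nir, red, L=0.5):
    """Soil Adjusted Vegetation Index (Huete 1988): dampens the
    soil-brightness effect that distorts plain NDVI over sparse cover."""
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (1.0 + L) * (nir - red) / (nir + red + L)

# Synthetic pixels: dense vegetation vs. bare soil
print(savi([0.60, 0.30], [0.10, 0.25]))  # high for vegetation, near zero for soil
```

L = 0.5 is the standard soil-adjustment factor for intermediate vegetation cover.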

  • Digital surface model draped over a hill shade model and the IR orthophoto:

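The hill shade layer can be regenerated from the DSM alone. A minimal sketch of the standard analytical hillshade (the same idea as GDAL's `gdaldem hillshade`; the 315° azimuth / 45° sun altitude defaults and the row orientation are assumptions):

```python
import numpy as np

def hillshade(dsm, cellsize=1.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Shaded relief from a DSM array (rows assumed to run north to south)."""
    az = np.radians(azimuth_deg)
    alt = np.radians(altitude_deg)
    dzdy, dzdx = np.gradient(dsm, cellsize)  # gradient along rows, then columns
    slope = np.arctan(np.hypot(dzdx, dzdy))
    aspect = np.arctan2(dzdx, -dzdy)         # geographic aspect convention (assumption)
    shade = (np.sin(alt) * np.cos(slope)
             + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shade, 0.0, 1.0)

# Sanity check: a flat surface shades to sin(45°) ≈ 0.71 everywhere
print(hillshade(np.zeros((4, 4)))[0, 0])
```

Draping the orthophoto over the result is then just a matter of multiplying (or blending) the two layers in the GIS of your choice.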

The georeferenced model

  • Google Earth: T3_Magdalenenberg.kmz (with subfolders)
    • including:
      • Flight path
      • Camera trigger locations
      • RGB orthophoto
      • IR orthophoto
      • DSM model
    • Size: 11MB


The 3D model to play with

  •  WebGL 3D model 
    • it might take some time to load...
    • tested with Firefox, Chrome, IE, and Safari (in Safari, WebGL needs to be enabled first)
    • this is how it looks (in case it doesn't load)

[Screenshot of the WebGL 3D model]

A 3D visualization for some more serious analysis

  • Orthophoto 


  • Hillshade


  • Enhanced Vegetation Index

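Unlike SAVI, the EVI also uses the blue band, which is available here from the RGB camera. A sketch with the standard MODIS coefficients, assuming co-registered bands scaled to reflectance-like values (the band pairing is my assumption, not the documented workflow):

```python
import numpy as np

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    """Enhanced Vegetation Index with the standard MODIS coefficients:
    corrects for atmospheric (blue band) and soil-background effects."""
    nir, red, blue = (np.asarray(b, dtype=np.float64) for b in (nir, red, blue))
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)

# Synthetic dense-vegetation pixel
print(evi([0.60], [0.10], [0.05]))
```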

The APM Log file:

Some random remarks:

  • All data is georeferenced
  • MicMac (as well as the whole processing chain) is quite complex
  • For safety reasons, I always take off in stabilize mode
  • Landing should have been automatic but had to be interrupted because of some curious onlookers 
  • APM+Droidplanner is a great professional product
  • It would be great to have an image dataset (or several) available on DIYDrones for comparing image processing software
  • Color calibration is a big issue

Thanks to Gary, DIYdrones, 3DRobotics, and the developers of the free open source tools!

Thorsten


Comments

  • T3

    Andy, thanks!

    I just had a look and was surprised: it was only 188 images for the final model.

  • Fantastic demonstration Thorsten; could you let us know how many images you captured for this reconstruction?

  • Thorsten, I haven't used caret but it's on my to-learn list.

  • T3

    Ned, great resources! Have you played around with caret in R? It gives you access to myriads of models and tunes them.

    I am working on a workflow where I use OpenCV to detect the color reference chart and automatically adjust the colors of the orthophoto. I hope to start the first field tests soon...
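    Once the chart patches are located, the color adjustment itself can be a simple least-squares fit. A minimal NumPy sketch of that step only (chart detection is left out, and all names here are illustrative, not the actual OpenCV workflow):

```python
import numpy as np

def fit_color_correction(measured, reference):
    """Fit an affine color-correction matrix by least squares.
    measured, reference: (n_patches, 3) RGB values sampled from the chart."""
    A = np.hstack([measured, np.ones((len(measured), 1))])  # affine offset term
    M, *_ = np.linalg.lstsq(A, reference, rcond=None)
    return M  # shape (4, 3)

def apply_color_correction(image, M):
    """Apply the fitted matrix to an (h, w, 3) float image in [0, 1]."""
    h, w, _ = image.shape
    flat = np.hstack([image.reshape(-1, 3), np.ones((h * w, 1))])
    return np.clip(flat @ M, 0.0, 1.0).reshape(h, w, 3)
```

    With a 24-patch chart this gives a stable fit; polynomial terms can be stacked into `A` the same way if the camera response is not well modeled by an affine map.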

  • Thorsten, Most of my random forests work is for satellite image land cover classification and percent forest / shrub cover mapping, but I'm also playing around with classification of some aerial imagery. I have some of my scripts and guides on a Bitbucket site: https://bitbucket.org/rsbiodiv/.

    For feature recognition I'm looking for actual applications and in the meantime I'm looking into some open source libraries like OpenCV.

  • T3

    Ned, I'll try to put something together over the next weeks.

    I also use Random Forests quite often. There are millions of methods out there, but RF is very robust and easy to handle. What kind of information are you aiming to extract? Single tree crowns, regions of poor plant growth, ...? Some repository for such algorithms would also be very important. The key to successful extraction is, IMHO, data preprocessing; the data mining algorithm is secondary.

    You are right: image to image variation as well as color calibration is a big problem. I am currently experimenting with standardized color charts. 

  • T3

    Andrew, thanks! 

  • Seriously impressive work Thorsten.  Congratulations!

  • Hi Thorsten - I agree each application is going to be somewhat different but for me it would be great if there were a place where people would contribute their experience with the different hardware and software components and protocols for different applications. A lot of the tools you used for this post have user guides and some have tutorials but very little is available in the context of low altitude small format photography. I wouldn't want to focus on a single global work flow but instead focus on the components using tutorials and supplement that with case studies / examples to show work flows for specific applications. I've considered setting something like that up myself but I'm a bit clueless about where to start. Maybe I should just start and figure it out...

    For information extraction I'm loosely following three paths: automated classification, hybrid visual/automated methods, and feature recognition. I'm not doing any algorithm development but trying to figure out which existing algorithms can be adapted for our work. My current work is experimenting with image segmentation as a preprocessing step and random forests for a classification algorithm. I think the greatest gains for now are to be had by developing image acquisition protocols to reduce image-to-image variation and enhance detection of the features of interest so I've been putting thought into that as well.

  • T3

    Hi Ned! Thanks to your advice from a discussion over at publiclab.org, I am using two cameras. I also started with your great PhotoMonitoringPlugin to overlay the RGB and the IR images. However, I needed more fine tuning and another workflow, so I am using the underlying ImageJ plugins instead. This is part of the problem with setting up a wiki/workflow: you need manual fine tuning and different paths at several places, for different setups, cameras, images ...

    I am working a lot with information extraction, especially supervised spatial data mining. But I agree that often very specific algorithms need to be developed. What do you have in mind?
