Posted by Mark Willis on November 1, 2009 at 11:54am
Heya Amigos,

I'm an archaeologist using aerial photography to document archaeological sites around the world. It is really inspiring to see all the innovation taking place here! I fly both blimp and kite aerial photography rigs and create 3D photogrammetric models from the photographs that I take. (See here for an example of what I mean: http://70.114.146.89/~mwillis/Turtle_Ridge_Poster01.pdf and https://www.youtube.com/watch?v=nJgvLll57f0 .)

Part of the process of creating the digital models is knowing the exact position of the camera while it's in the sky. Currently, I work this out by putting a number of targets on the ground that show up in each photograph and shooting in the XYZ location of each target with a total station. Using these data I'm able to figure out the yaw, pitch, roll and XYZ for each photograph taken, and from that recreate the shape of the earth below.

This is great stuff, but the process of shooting in the ground control points (targets) can take a very long time, and the terrain I work in can make it very complicated. The entire process could be simplified if I had a sensor recording these data on the camera rig. Can anyone point me in the direction of an IMU that can record tight elevation values and possibly has a differentially corrected GPS? Any advice is appreciated.

By the way, I'm looking to do this from a UAV heli at some point, but it looks like the technology isn't quite there yet.

Thanks,
Mark
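For readers curious how that pose-recovery step works, here is a minimal sketch of single-photo resection using OpenCV's solvePnP. The surveyed target coordinates, pixel measurements and camera intrinsics below are made-up placeholders (constructed to be consistent with a camera hovering near X=5, Y=5 at 30 m), not Mark's actual data or software.

```python
# Minimal sketch: recover one photo's yaw/pitch/roll and XYZ from surveyed
# ground targets visible in the frame. Assumes OpenCV + NumPy; every number
# here is an illustrative placeholder, not real survey data.
import cv2
import numpy as np

# Surveyed target coordinates (X, Y, Z) in metres, e.g. from the total station.
ground_targets = np.array([
    [0.0,  0.0,  0.0],
    [10.0, 0.0,  0.5],
    [10.0, 10.0, 0.2],
    [0.0,  10.0, 0.8],
    [5.0,  5.0,  0.0],
    [2.0,  7.0,  1.2],
])

# Pixel coordinates of the same targets picked out in the photograph.
pixel_targets = np.array([
    [690.7, 1101.3],
    [1363.0, 1107.0],
    [1359.6, 432.4],
    [681.5, 425.5],
    [1024.0, 768.0],
    [815.7, 629.1],
])

# Approximate camera intrinsics: focal length and principal point in pixels.
K = np.array([[2000.0, 0.0, 1024.0],
              [0.0, 2000.0, 768.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume lens distortion has already been calibrated out

# EPnP copes well with targets that lie nearly in one plane.
ok, rvec, tvec = cv2.solvePnP(ground_targets, pixel_targets, K, dist,
                              flags=cv2.SOLVEPNP_EPNP)

R, _ = cv2.Rodrigues(rvec)            # rotation: ground frame -> camera frame
camera_xyz = (-R.T @ tvec).ravel()    # camera position back in ground coordinates
print("Camera position (m):", camera_xyz)  # roughly (5, 5, 30) for these numbers
print("Rotation (ground -> camera):\n", R)
```

solvePnP handles one photo at a time; the block-adjustment approach discussed in the replies below generalizes this to many overlapping photos solved together.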
Replies
Hi Mark,
I'm interested in your work, and I have a similar project. I want to use a heli as the platform, and the AHRS data could come from the heli autopilot's log. For the data processing we will use ERDAS LPS and other GIS software. I'd be glad to hear any advice about this work.
Thanks,
ZIFC
Using PTGUI and EasyUAV
http://www.aerialrobotics.eu/examples/WinterPhotomapping.kmz
(Warning: 27 MB Google Earth file; it may take up to 15 s to process when opening in GE.)
Details about EasyUAV are here
http://www.rcgroups.com/forums/showthread.php?t=1137076&highlig...
The snow has melted today, so this is already archaeology.
The camera is a Sony Webbie (5 Mpix) flown at 120 m altitude (I recommend 200 m for better overlap).
Processing of the logs and the landing were automatic.
The photo stitching was manual and took around 3 h, including learning the software (it should be 1-1.5 h per project in the future).
I downgraded the resolution of each photo by a factor of four (from 5 Mpix to 1.25 Mpix), otherwise the file size would be insane.
Around 70 photos were used, over roughly 12 min of flight time at -10 °C.
You can see that the autopilot was enabled some 10 s after takeoff.
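For anyone scripting the same reduction, here is a minimal sketch of that downsampling step, assuming Python with Pillow; the folder names are placeholders, and this is not necessarily the tool used above.

```python
# Minimal sketch: halve each photo's width and height (4x fewer pixels,
# e.g. 5 Mpix -> 1.25 Mpix) before stitching. Folder names are placeholders.
from pathlib import Path
from PIL import Image

src = Path("photos_full")       # original 5 Mpix frames
dst = Path("photos_quarter")    # downsampled copies for stitching
dst.mkdir(exist_ok=True)

for photo in sorted(src.glob("*.jpg")):
    img = Image.open(photo)
    small = img.resize((img.width // 2, img.height // 2), Image.LANCZOS)
    small.save(dst / photo.name, quality=90)
```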
You're using the right methods, but the technology isn't there yet for "tight" elevation readings. I'm waiting for affordable LiDAR systems to map the ground, but the prices need to come down drastically.
Have you tried overlaying your images on top of DEM data? It might help with the visuals. ArcGIS/ArcInfo can do it.
To help you with your search, you might want to talk to these guys:
"Regarding the inclusion of GPS - I have some thoughts.
I currently work in a navigation research group at the University of Calgary, where a lot of people work on GPS+INS integration. For the group, I have personally designed and built a pair of pedestrian data-collection units that synchronize and log data from:
1x NovAtel GPS+GLONASS high-accuracy/high-rate receiver (for outdoors)
1x SiRF III high-sensitivity receiver (for "indoors" - concrete still makes this nearly unworkable)
1x digital barometer
1x tri-axial magnetometer
9x Crista inertial measurement units (I have recently built a pair of ADIS16350-based Crista clones that are compatible)
2x uncommitted serial ports
It logs all of this to SD, but the most important characteristic of the system is that it synchronizes and time-stamps everything, save the uncommitted serial ports, against the pulse-per-second and timing data from the NovAtel receiver, down to 1/36000 of a second.
Basically, the reason I'm buying your sweet camera is that some people in the group want to branch out into integrating image data into the navigation solution, and I'm hoping there's some way to have the capture times of the pictures time-stamped relative to an external pulse-per-second signal.
Also, for the benefit of your project, I wholeheartedly recommend using a GPS/GNSS receiver that outputs a pulse-per-second timing signal. After all, every GPS receiver is intrinsically solving for X, Y, Z and time (error) - why deny yourself a quarter of the data? ;)
If I recall correctly, some of the u-blox units don't output PPS.
"
http://www.surveyor.com/cgi-bin/yabb2/YaBB.pl?num=1239309094/0
Cheers,
Tenzin
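As a rough illustration of the time alignment described in the quote above, here is a sketch of mapping a logger-clock timestamp (say, a camera trigger) onto GPS time using pulse-per-second edges. Every name and number in it is invented for illustration; it is not the actual logger's firmware or data format.

```python
# Sketch: convert a logger-clock timestamp to GPS time by interpolating
# between pulse-per-second (PPS) edges, each tagged with the GPS time the
# receiver reported for that pulse. All values are illustrative.

def to_gps_time(event_local, pps_local, pps_gps):
    """event_local: logger-clock time of the event (e.g. a camera trigger).
    pps_local: logger-clock times at which PPS edges were captured.
    pps_gps: GPS times (seconds of week) reported for those same edges."""
    for i in range(len(pps_local) - 1):
        t0, t1 = pps_local[i], pps_local[i + 1]
        if t0 <= event_local <= t1:
            # Linear interpolation between the bracketing pulses also
            # absorbs logger-clock drift over that one-second interval.
            frac = (event_local - t0) / (t1 - t0)
            return pps_gps[i] + frac * (pps_gps[i + 1] - pps_gps[i])
    raise ValueError("event falls outside the logged PPS span")

# Example: the logger clock runs slightly fast relative to GPS time.
pps_local = [12.000, 13.001, 14.002]
pps_gps = [394200.0, 394201.0, 394202.0]
print(to_gps_time(12.470, pps_local, pps_gps))  # about 394200.47
```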
As Steve suggested, photogrammetric block adjustment is the way to cut the field work to a minimum. You don't need accurate XYZ for the camera positions, just initial guesses that are allowed to be meters or tens of meters off the true position, so you can estimate XYZ from maps or other reference material without having to put an IMU on board. What you do need are accurate ground points - not on every image, but in the corners of the image block; some 4 or 8 points are enough to accurately calculate roll, pitch, yaw, X, Y and Z for each image. With careful work you'll obtain an accuracy of 1 pixel for ground XY and 1.5-2 pixels for ground Z. The software to do it is here: www.mosaicmill.com/products/software.html
On the hardware side - there are UAV helis capable of doing what you need, from several manufacturers. With a price tag, though.
Cheers,
Janne
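To make the block-adjustment idea concrete, here is a toy example on synthetic data, assuming Python with NumPy and SciPy. It only illustrates what such software solves - rough photo poses refined against a few corner GCPs plus tie points matched between overlapping photos - and is not the MosaicMill product or anyone's actual workflow; every number in it is invented.

```python
# Toy photogrammetric block adjustment: refine each photo's roll/pitch/yaw/XYZ
# from approximate initial values, using four corner ground control points (GCPs)
# and tie points matched between the photos. Synthetic data, purely illustrative.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

f = 2000.0  # focal length in pixels (assumed already calibrated)

def project(R, cam_xyz, pts):
    """Project ground points (N, 3) into a camera with rotation R and centre cam_xyz."""
    p = (pts - cam_xyz) @ R.T            # points expressed in the camera frame
    return f * p[:, :2] / -p[:, 2:3]     # simple nadir-style pinhole projection (pixels)

def observe(poses, pts):
    """Image measurements of pts in every photo described by (rx, ry, rz, X, Y, Z)."""
    return np.array([project(Rotation.from_euler("xyz", p[:3]).as_matrix(), p[3:], pts)
                     for p in poses])

# "True" situation: 4 photos about 80 m above gently undulating ground.
rng = np.random.default_rng(0)
true_poses = np.array([[0.02, -0.01, 0.30, 20.0, 20.0, 80.0],
                       [-0.01, 0.02, 0.30, 60.0, 20.0, 81.0],
                       [0.01, 0.01, 0.30, 20.0, 60.0, 79.0],
                       [-0.02, -0.02, 0.30, 60.0, 60.0, 80.5]])
gcp_xyz = np.array([[0., 0., 0.], [80., 0., 0.5], [80., 80., 0.2], [0., 80., 0.8]])
tie_xyz_true = rng.uniform([10, 10, 0], [70, 70, 1.5], size=(15, 3))

# Measured pixel coordinates with half-a-pixel noise (toy: every point in every photo).
gcp_obs = observe(true_poses, gcp_xyz) + rng.normal(0, 0.5, (4, 4, 2))
tie_obs = observe(true_poses, tie_xyz_true) + rng.normal(0, 0.5, (4, 15, 2))

# Unknowns: photo poses (started metres and a few degrees off) and tie-point coordinates.
init_poses = true_poses + rng.normal(0, [0.05, 0.05, 0.05, 5.0, 5.0, 5.0], true_poses.shape)
init_ties = tie_xyz_true + rng.normal(0, 3.0, tie_xyz_true.shape)

def residuals(params):
    poses = params[:24].reshape(4, 6)
    ties = params[24:].reshape(15, 3)
    res = []
    for i, pose in enumerate(poses):
        R = Rotation.from_euler("xyz", pose[:3]).as_matrix()
        res.append(project(R, pose[3:], gcp_xyz) - gcp_obs[i])  # GCP coordinates held fixed
        res.append(project(R, pose[3:], ties) - tie_obs[i])     # tie-point coordinates adjusted
    return np.concatenate([r.ravel() for r in res])

fit = least_squares(residuals, np.concatenate([init_poses.ravel(), init_ties.ravel()]))
print("Adjusted photo XYZ (m):\n", np.round(fit.x[:24].reshape(4, 6)[:, 3:], 2))
print("RMS image residual (px):", round(float(np.sqrt(np.mean(fit.fun ** 2))), 2))
```

If it behaves, the RMS residual settles near the half-pixel measurement noise even though the starting camera positions were metres off, which is the point Janne is making about only needing approximate initial values plus a few good ground points.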
Mark,
if you want to maintain the accuracy you are working with, then you are probably already using the best method. A 1-degree error in the IMU will give you about 2 ft of error on the ground from 100 ft altitude. That 1-degree budget has to cover not only the measurement accuracy of the IMU but also the mounting errors between the camera and the IMU; these can be reduced by calibration, but that is a little troublesome as well.
Software is probably the only way you will reduce your fieldwork. See the packages I mentioned earlier.
Mike
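A quick sanity check of that 1-degree figure, worked as a small sketch:

```python
# Ground displacement caused by a 1-degree pointing error seen from 100 ft up.
import math
altitude_ft = 100.0
pointing_error_deg = 1.0
shift_ft = altitude_ft * math.tan(math.radians(pointing_error_deg))
print(round(shift_ft, 2))  # ~1.75 ft, i.e. roughly the quoted 2 ft
```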
I was hoping that telemetry gathered with the right sensors would provide good initial orientation values for each photograph. As I understand it, this is the same process that airplane-based photogrammetry uses today.
My current workflow involves taking hundreds of photographs in a rough (very rough) grid pattern over the archaeological site. The flight grid is rough because of changes in the wind and because of maneuvering around obstacles on the ground. One of the reasons I'm looking into electric-heli-based photography is that it should give a more regular flight pattern.
To answer one of the questions above: the accuracy of the 3D models varies, but I typically get 0.05 m for X and Y and 0.1 m for Z. That accuracy is achieved when placing targets at 10 m intervals and gathering the target coordinates with the total station.
Thanks for all the input!
-Mark
Hi Mark,
Nice work! Archaeology seems like a perfect application for aerial photography from UAVs, although I can imagine that measuring all those control points on the ground is not very practical with your current method. But I'm afraid even the most accurate GPS/IMU alone won't give you good enough position and orientation data for the purpose.
What you need to do instead is take photos in a grid pattern with good overlap, then run a photogrammetric block adjustment to solve for the true position and orientation of each photo station. You only need approximate initial values.
The technology IS there.
Cheers,
Steve
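For planning the kind of grid Steve describes, here is a small sketch of the footprint, overlap and spacing arithmetic. The camera parameters (sensor size, focal length, image width) are assumptions picked for illustration, not a specific rig.

```python
# Sketch: photo footprint and exposure/line spacing for a grid with given overlap.
# Camera parameters below are illustrative assumptions, not a specific camera.
def footprint(alt_m, focal_mm, sensor_w_mm, sensor_h_mm, img_w_px):
    gsd = alt_m * (sensor_w_mm / focal_mm) / img_w_px  # ground sample distance, m per pixel
    width = alt_m * sensor_w_mm / focal_mm             # ground width covered by one photo, m
    height = alt_m * sensor_h_mm / focal_mm            # ground height covered, m
    return gsd, width, height

alt = 100.0  # flying height above the site, m
gsd, w, h = footprint(alt, focal_mm=6.0, sensor_w_mm=6.2, sensor_h_mm=4.6, img_w_px=2592)

forward_overlap, side_overlap = 0.8, 0.6
photo_spacing = h * (1 - forward_overlap)  # distance between exposures along a flight line
line_spacing = w * (1 - side_overlap)      # distance between adjacent flight lines

print(f"GSD ~{gsd * 100:.1f} cm/px, footprint ~{w:.0f} x {h:.0f} m")
print(f"trigger every ~{photo_spacing:.0f} m, flight lines ~{line_spacing:.0f} m apart")
```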
Mark,
what sort of accuracy are you getting, and what are you after?
I have looked into various camera-orientation measurement methods for my own projects. My view now is that the camera should be stabilised as close to horizontal as possible, followed by corrections and processing such as you are already doing - but there are other solutions.
Mike
http://www.rc-cam.com/forum/index.php?/topic/3278-easyuav-personal-...