Problem 1:

The APM .log files contain CAM messages based on when the autopilot commanded the camera to trigger. The autopilot doesn't know whether, or when, the camera actually fired. This can produce CAM records with no corresponding photo, or latency issues; either one makes the geotagging process difficult.

Problem 2:

I'm using a Sony NEX in a roll gimbal. The .log file contains the IMU attitude data for the aircraft, not for the camera. I would like to be able to feed the camera's IMU data into the image processing software.

The Solution:

These problems were solved by the Field Of View GeoSnap.

However, they want $10,000 for it, and it is big and heavy. How can we DIY this?

We could use a separate APM mounted directly on the back of the camera. The new breed of mini APMs would fit nicely and can be found for $40 online (there is no 3DR mini option). We would need to feed it GPS data, which could be tapped with a simple splice off the aircraft's GPS receiver. The same could be done with the magnetometer.

This APM could be used as a standalone triggering device if desired. If triggering by distance, it wouldn't even need a flight plan, unless you wanted a couple of waypoints to start and stop the triggering with DO_SET_CAM_TRIGG_DIST. You could use a very large WP radius to make sure they are not missed.
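To pick a sensible trigger distance for distance-based triggering, you can work backwards from the desired forward overlap. The numbers below (sensor size, focal length, altitude, overlap) are illustrative assumptions, not values from the post:

```python
# Sketch: back-of-the-envelope CAM_TRIGG_DIST calculation for a desired
# forward overlap. All numeric inputs are illustrative assumptions.
sensor_h_mm = 15.6   # APS-C sensor height along-track (landscape), mm
focal_mm = 16.0      # lens focal length, mm
alt_m = 100.0        # altitude above ground, m
overlap = 0.75       # desired forward overlap (75%)

footprint_m = alt_m * sensor_h_mm / focal_mm   # ground footprint along track
trigg_dist_m = footprint_m * (1.0 - overlap)   # distance between triggers

print(footprint_m, trigg_dist_m)
```

With these assumed numbers the along-track footprint is 97.5 m, so triggering roughly every 24 m gives 75% overlap.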

Here is the part that gets more difficult and requires some code modification...

How do we tell this APM that the camera has actually fired, so it can record that event along with the GPS and IMU data at that moment? For the Sony NEX, the iISO hot shoe could provide the signal that the camera has fired. This is the solution the GeoSnap uses. There are a number of cheap eBay hot shoe adapters for around $8 that could be hacked to provide an easy slide-in connection without soldering to the contacts. Cameras without a hot shoe could be hacked to provide a trigger signal from other sources, perhaps the speaker. We may need a voltage divider or multiplier.

So now the question is: how will the APM read this trigger signal, and how can we have each event recorded in the logs? This is where I need help from the community, as it exceeds my knowledge base. Could we simply use the voltage pin to detect the spike from the hot shoe, or would it be better to write new code for a new sensor pin, or perhaps I2C? Is there already any code in place that provides a sort of reverse relay, like for a proximity switch or another sensor that provides a simple on/off signal?
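Whatever pin is used, the core of the firmware change is edge detection with debouncing, since a hot-shoe contact can bounce. A real implementation would live in APM's C++ code (polling an analog pin or using a pin-change interrupt); the detection logic can be sketched in Python, with hypothetical ADC sample values standing in for the hot-shoe signal:

```python
# Sketch of the detection logic only. Threshold and debounce values are
# assumptions; real hot-shoe voltages would need to be measured first.

def detect_triggers(samples, threshold=512, debounce=3):
    """Return sample indices where the signal rises above `threshold`,
    ignoring re-triggers within `debounce` samples (contact bounce)."""
    events = []
    armed = True          # only fire on a rising edge
    last = -10**9
    for i, v in enumerate(samples):
        if armed and v >= threshold and i - last >= debounce:
            events.append(i)
            last = i
            armed = False  # wait for the signal to drop before re-arming
        elif v < threshold:
            armed = True
    return events

# A fake trace: quiet, a bouncy pulse around index 5-7, quiet, a clean pulse.
trace = [0, 0, 0, 0, 0, 900, 100, 880, 0, 0, 0, 0, 950, 0]
print(detect_triggers(trace))   # the bounce at index 7 is suppressed
```

The same arm/disarm logic would translate directly to an interrupt handler that timestamps each event against the autopilot clock.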

Then the next issue is getting the trigger signals recorded in the DF log. Should they replace the CAM message, or be recorded in addition to it? If we had a CAM2 log that recorded the actual trigger moment alongside the old CAM log, which records when the trigger was commanded by the APM, we would be able to measure the latency and troubleshoot issues.
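Once both message types exist, measuring latency is just pairing each commanded trigger with the next actual trigger. A hedged sketch of that pairing, with made-up timestamps (the CAM2 message itself is hypothetical):

```python
# Given commanded-trigger times (existing CAM messages) and actual-trigger
# times (a hypothetical CAM2 message from the hot shoe), pair each command
# with the next actual trigger and report the latency in ms.

def pair_latencies(cam_ms, cam2_ms, max_ms=1000):
    """Pair each commanded trigger with the first actual trigger that
    follows it within max_ms; None marks a command with no photo."""
    pairs, j = [], 0
    for t in cam_ms:
        while j < len(cam2_ms) and cam2_ms[j] < t:
            j += 1
        if j < len(cam2_ms) and cam2_ms[j] - t <= max_ms:
            pairs.append((t, cam2_ms[j] - t))
            j += 1
        else:
            pairs.append((t, None))   # missed photo for this command
    return pairs

cam  = [1000, 3000, 5000, 7000]       # commanded (ms)
cam2 = [1120, 3095, 7210]             # actual (ms); the 5000 shot was missed
print(pair_latencies(cam, cam2))
```

A consistently large latency would point at camera configuration (autofocus, buffering); a None would flag exactly which CAM message has no photo.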

So what do you guys think? Would others be interested in this? Who could help write the code modifications?

Looking forward to some brainstorming.



    • I'm actually working on something new, but it's in the research stage.

      That GeoSnap solution is criminally expensive. 

      Here's what you could look at:

      - the flash or "strobe" signal is the best way to do this. That's probably a signal that fires halfway into the "shutter-open" time. However, it's possible that when the shutter time is very short (fast shutter), the signal is not triggered. It's also possible that the camera only sends the trigger signal in specific lighting conditions, so keep that in mind when testing.

      - P&S cameras probably don't have an external flash connection like the above, but may have a "strobe" pin hidden somewhere inside, much as many Canon cameras have a hidden UART on their board.

      - CHDK has a special mode where you can sync multiple cameras. Here you send a signal once to start the picture-taking process (exposure, gains, etc.), then send another signal to take the actual picture. Unfortunately, this again is control without feedback, so you can't tell whether it actually fired.

      The weirdest way to do this? It may be possible to force the flash on, put together a simple circuit with an Arduino and a light sensor, and use the light sensor to detect when the on-board flash fires. This may not work if the camera calculates the shutter speed differently, assuming the flash actually floods the environment (you'd get dark photos), but it may be possible to compensate for those light losses.

      Maybe the P&S camera has an internal pin that goes high when the flash fires. 

      Hooking it up to the flash directly is not recommended :). Flashes are designed to store a large charge and release a very short, high-voltage pulse to a lamp to make it flash, so you don't want to send that voltage to any sensitive electronics.

  • For problem 1, you can use the time taken of the photo in the EXIF data together with the CAM timestamps to figure out which goes where. The delay between the photo and the CAM message really shouldn't be more than 200 ms, or the camera hasn't been configured correctly (focus still set to auto, etc.). Basically, from the logs, timestamps and the timing on the camera, you can figure out where you may have photos missing and skip to the next CAM message.
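The matching idea above can be sketched as follows. The times are illustrative, the clock-offset handling is simplified, and a real version would read the EXIF with an image library:

```python
# Walk the CAM messages and the photo EXIF times in parallel; when no photo
# falls within a tolerance of where one is expected, assume that CAM
# message has no photo and skip it.

def match_photos(cam_s, exif_s, tol_s=0.5):
    """Return a list aligned with cam_s: the EXIF time matched to each
    CAM message, or None when no photo falls within tol_s of it."""
    offset = exif_s[0] - cam_s[0]       # assume the first photo succeeded
    matches, j = [], 0
    for t in cam_s:
        expected = t + offset
        if j < len(exif_s) and abs(exif_s[j] - expected) <= tol_s:
            matches.append(exif_s[j])
            j += 1
        else:
            matches.append(None)        # missing photo for this CAM message
    return matches

cam  = [10.0, 12.0, 14.0, 16.0]         # CAM times from the log (s)
exif = [100.0, 102.0, 106.0]            # camera clock (s); one photo missing
print(match_photos(cam, exif))
```

The None entry flags the CAM message with no corresponding photo, which is exactly the gap-skipping behaviour described.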

    For problem 2, you don't need the attitude information for this software anymore. The gimbal is there to ensure you get as close to a nadir view as possible.

    Better GPS measurements may slightly improve the positional accuracy, but I'm not sure this is worth the effort; you need to look at the application area for each method. Since GPS usually updates at 5 Hz, there will always be some travel between the actual shutter position and what you logged, plus the roughly 1 m inaccuracy of a GPS receiver on a moving vehicle whose speed varies. The only way to reliably improve accuracy is to use ground control points.

    • We are using an IR trigger that has an inconsistent delay, sometimes resulting in missing images. Finding out where the missing images occur is problematic and time-consuming, since we often work with large datasets. For example, mapping 1 sq km at 1.5 cm GSD takes over 2,800 images with our Sony NEX-7. Moreover, the EXIF time stamps are only to the nearest second, but depending on our flying speed, the camera may be triggered at sub-second intervals. 

      Of course, we could do image processing without any data on camera positions, but that adds a significant number of hours on top of an already lengthy computational process. A hardware solution that can reliably produce GPS + IMU data to match the images would be ideal. Redundancy is helpful for saving time and could even make a difference by not requiring additional fieldwork to make up for lost data. However, if there are software solutions that could overcome these issues, that would be even better.

      • Here's a script that goes through the APM log file, extracts the GPS + CAM information, and uses the EXIF time of each photo at one-second resolution. It works by assuming the first photo taken succeeded, using that photo's EXIF time to calculate the delta between the camera clock and GPS time (the first CAM message). Then, for each remaining photo, it rereads the log and tries to find the closest position. In my case I didn't bother to interpolate the GPS position, but you could attempt that as an extra.
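The core of that approach can be sketched as below. The data structures are invented for illustration; a real script would parse the DF log and read EXIF tags from the image files:

```python
# Use the first photo to get the camera-clock-to-GPS-clock offset, then
# look up the closest logged GPS fix for each photo (no interpolation).
import bisect

def geotag(photo_times, gps_times, gps_positions, first_cam_time):
    """photo_times: EXIF times (s, camera clock); gps_times must be sorted.
    Returns the closest logged (lat, lon) for each photo."""
    offset = first_cam_time - photo_times[0]   # camera clock -> GPS clock
    out = []
    for t in photo_times:
        g = t + offset
        i = bisect.bisect_left(gps_times, g)
        # pick whichever neighbouring fix is closer in time
        if i > 0 and (i == len(gps_times) or g - gps_times[i - 1] <= gps_times[i] - g):
            i -= 1
        out.append(gps_positions[i])
    return out

gps_t = [100.0, 100.2, 100.4, 100.6]        # 5 Hz fixes (s, GPS clock)
gps_p = [(52.0, 4.00), (52.0, 4.01), (52.0, 4.02), (52.0, 4.03)]
print(geotag([5.0, 5.5], gps_t, gps_p, first_cam_time=100.0))
```

Interpolating between the two neighbouring fixes instead of snapping to the nearest one would be the natural refinement mentioned above.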

        This script was applied to a survey where the vehicle traveled at 3 m/s, so I'd be at most 3 m off.

        In your case, with sub-second intervals, you'd get the same position for a couple of photos.

        If you have extra knowledge of the spacing or minimum delay between photos, you can exploit that to refine your timings. For example, if 3 photos were taken and you know there was a 250 ms minimum delay, you'd apply 0.75, 1.0 and 1.25 as times instead. When you reapply the GPS data, you'd get different positions for them again.

        This sensor data, however, is not the ground truth in orthophoto/model construction; the actual pixel data is. The values you insert eventually determine the orientation and placement (with some margin of error) and help establish the parameters of the camera (radial/tangential distortion). The good news is that if you take many measurements, the error is much less than for a single photo. Plus, I don't think current software packages pay that much attention to attitude data anyway. I'm not sure you can even import it into some of them to begin with; at a certain distance, a minute change in degrees has enormous consequences for matching, so from that perspective it's practically useless (whereas slight positional errors have fewer consequences).
        • Hi, 

          I just tried your script, but I cannot get it to run. I get a "Segmentation fault 11". I am running the script on OS X with Python 2.7.11 and pyexif2-0.3.2.

          Any idea what the problem could be?

          Thanks a lot!!


