Some tests on GPS, geotagging and stitching accuracy

Hi all,

I want to share some results of comparisons between different GPS modules as well as different approaches for geotagging and stitching images based on APM:Copter log files.

CAM_TRIGG_DIST (blue) vs. adjusted coordinates (green). Stitching is based on corrected coordinates.

 

The background is that we were aiming for a tool that would allow us to check the image and geotagging quality immediately after completing a survey mission. So, in case of any camera-triggering or geotagging errors (as well as image quality problems), one can repeat the mission with different settings without having to wait for complex and time-consuming processing in the office. Additionally, I was interested in the positional accuracy one could expect from repeated survey missions and from different GPS modules. Hence, this post focuses on different aspects of positional accuracy. It provides a short summary of our experiments. It is not a scientific study and most of the tests were not repeated, so this is just intended as a rough guide. Anyway, I hope it will be helpful to some of you.

1)        A comparison of the PDOP (Position Dilution of Precision) values between different common GPS modules for the Pixhawk autopilot

The crucial component for geotagging images, as well as for geo-referencing in general, is the GPS receiver. There are mainly two different u-blox systems currently in use together with the Pixhawk: the LEA 6H and the NEO M8N.

For a test I mounted the following three GPS modules on my copter:

Test setup: 3DR LEA 6H, VR M8N and CSG M8N

The major difference between the two M8 modules is that the CSG M8 has a larger antenna (a "mini" version with a smaller antenna is available as well), whereas the VR M8 has an additional amplifier. The VR M8 was connected to the Pixhawk as GPS1 and the 3DR 6H as GPS2. The CSG M8 was connected directly to my laptop via USB. I used Mission Planner to monitor the VR and 3DR GPS modules and u-center to monitor the CSG one. Because AC3.2 actually reports the PDOP (although it labels it HDOP), we need to compare the AC3.2 "HDOP" with the PDOP reported in u-center. It is also important to set the "Min SV elevation" to the same value on all modules. To compare the performance I tested all three modules in parallel. In a first test I compared the performance outdoors and indoors (close to a window).
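As a side note for anyone monitoring a module over USB the same way: the DOP values can also be read directly from the raw NMEA stream instead of from u-center. A minimal sketch (the example sentence is made up for illustration) that pulls PDOP, HDOP and VDOP out of a $GPGSA sentence:

```python
def parse_gsa(sentence):
    """Extract (pdop, hdop, vdop) from an NMEA $GPGSA sentence.

    In a GSA sentence, fields 15-17 (comma-separated) hold PDOP,
    HDOP and VDOP; the last field carries the '*' checksum suffix.
    """
    fields = sentence.split(',')
    if not fields[0].endswith('GSA'):
        raise ValueError('not a GSA sentence')
    pdop = float(fields[15])
    hdop = float(fields[16])
    vdop = float(fields[17].split('*')[0])  # strip the checksum
    return pdop, hdop, vdop

# Example (made-up sentence):
pdop, hdop, vdop = parse_gsa('$GPGSA,A,3,04,05,,09,12,,,24,,,,,2.5,1.3,2.1*39')
```

Logging these alongside the AC3.2 values makes the PDOP-vs-"HDOP" comparison explicit.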

Results of the outdoor test

  • The LEA 6H always shows fewer satellites and a higher PDOP compared to the other two (~1.7 vs. 1.1)
  • The VR M8 is comparable to the CSG M8 when the "Min SV elevation" of the latter is set to 5°. If the "Min SV elevation" of the CSG is set to 10°, the VR is slightly better. This is why I assume that "Min SV elevation" is set to 5° on the VR M8.
  • The CSG M8 was the first to get a 3D fix, and it performed better for quite a while. Interestingly, the positional accuracy of both the VR and the 3DR increased significantly after rebooting the Pixhawk. This seems a little strange to me, but I experienced it more than once.

Outdoor test: CSG with Min SV elevation set to 5°

Results of the indoor test

  • After waiting 15 min the CSG had a 3D Fix.
  • After 20 min the 3DR came alive (gpsstatus2 = 3).
  • Even after 40 min the VR still showed gpsstatus2 = 1.
  • When setting the "Min SV elevation" to 5° on the CSG, the results got slightly worse and noisier.

The indoor results are reproducible. That's why I went outside - indoors I thought the VR M8 was dead, but luckily that is not the case. Roberto from VR reckons that the internal pre-amp could go into saturation if there is some interference.

Remarks

  • always use an M8 instead of a 6H
  • the CSG M8 is the most sensitive
  • this is presumably due to its larger antenna, even though the VR has an amplifier
  • the reason why the results improved so markedly directly after rebooting should be investigated, since this is not the case for the CSG module (or at least I hadn't noticed it before) and since it should result in better positional accuracy, at least at the beginning of a flight. Since the VR and the 3DR GPS were connected to the Pixhawk, the problem might also be related to the autopilot. Can someone confirm this behavior?
  • the outdoor accuracy seems better and less noisy if "Min SV elevation" is set to 5°. However, with this setting the receiver might be more affected by multipath errors, as shown by the higher noise for this setting in the indoor test. Hence, to be on the safe side, 10° might be the better choice.
  • There was no obvious difference whether a GPS antenna was covered with the 3DR GPS cover or not.
  • I haven’t tested it with the VR M8 but the CSG M8 is very sensitive to interference from the bluetooth telemetry module - even if mounted >20 cm away from the GPS. The 3DR LEA is less sensitive.
  • All tests were conducted without additional shielding of the GPS module. A first test with additional shielding between the VR M8 and the Pixhawk et al. shows a marginally higher number of satellites (20 instead of 18) and slightly lower DOP values. But this has to be verified. I also have to test whether there is a reduction in the bluetooth interference.

2)         Camera trigger delay and direct image stitching

For many applications a simple image overlay is sufficient. The advantage compared to the generation of orthophotos is that the mosaic can be generated quickly in the field. However, even stitching can be relatively time-consuming - especially with large image sets and if image registration is performed using automated image-matching techniques. Hence, we were aiming for an approach that provides georeferenced image mosaics relying on the GPS coordinates only.

To achieve this aim, we use the CAM entries in the log file for geotagging the images. The GPS used in all following tests was the Virtual Robotics NEO M8N. To achieve optimal image quality the camera settings have to be adjusted for every single image. This results in some delay between triggering the camera and actually shooting the image.

 

Direct stitching results

The following two figures (I moved the first one to the top of the post) show the coordinates at which the camera is triggered (blue) vs. automatically corrected coordinates (green), where individual velocities and shooting delays have been factored in. The stitched image based on the corrected coordinates is shown in the background.
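The velocity/delay correction can be sketched roughly as follows. This is not the actual tool, just an illustration of the idea: the copter keeps moving during the shooting delay, so the trigger coordinate is shifted along the flight path by velocity x delay, using a small-offset flat-earth approximation.

```python
import math

EARTH_RADIUS = 6371000.0  # mean Earth radius in metres

def correct_position(lat, lon, vel_north, vel_east, delay_s):
    """Shift a trigger coordinate forward along the flight path to
    account for the camera's shooting delay.

    lat/lon in degrees, velocities in m/s (as logged), delay in seconds.
    Valid for the small offsets involved here (a few metres).
    """
    d_north = vel_north * delay_s  # metres travelled north during the delay
    d_east = vel_east * delay_s    # metres travelled east during the delay
    new_lat = lat + math.degrees(d_north / EARTH_RADIUS)
    new_lon = lon + math.degrees(d_east / (EARTH_RADIUS * math.cos(math.radians(lat))))
    return new_lat, new_lon
```

For example, at 5 m/s ground speed and a 200 ms delay the image is taken a full metre past the trigger point - already visible in a mosaic at typical survey altitudes.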

The mosaic based on CAM_TRIGG_DIST is corrupted. This is because CAM_TRIGG_DIST was set too small in this test relative to the speed and camera trigger delay; hence, some images were not taken. Normally there is only some offset. Synchronizing the image time stamps with the CAM_TRIGG_DIST times also helps to reduce this effect in case images are missing.
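Detecting missing images by synchronizing time stamps could look like this. `find_missing` is a hypothetical helper, not part of the described tool: it flags CAM trigger events that have no image time stamp nearby, assuming camera and log clocks have already been aligned.

```python
def find_missing(cam_times, image_times, tol=0.5):
    """Return indices of CAM trigger events (seconds since boot, from the
    log) that have no image time stamp within tol seconds.

    Assumes image_times have already been converted to the same clock as
    cam_times (e.g. via a constant camera-to-GPS time offset).
    """
    missing = []
    for i, t in enumerate(cam_times):
        if not any(abs(t - ti) <= tol for ti in image_times):
            missing.append(i)
    return missing
```

Triggers flagged this way correspond to gaps that would otherwise silently corrupt a distance-based mosaic.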

CAM_TRIGG_DIST (blue) vs. adjusted coordinates (green). Stitching is based on CAM_TRIGG_DIST.

Remarks

The comparison between the stitched images shows that

  • Geotagging has to be performed carefully to reduce artifacts
  • NADIR gimbals provide an easy way to reduce perspective distortions and guarantee the best viewpoint for many applications (e.g. SAR, precision agriculture)

Processing takes only a fraction of the time compared to generating an orthophoto. Stitching is completed within minutes on a normal laptop.

3)         Positional GPS accuracy of repeated surveys

What is most interesting is the positional accuracy between directly stitched mosaics of repeated surveys. Therefore, we repeated missions over the same area on two consecutive days. Georeferencing was performed independently for each image and based only on the GPS coordinates provided by the VR M8. Additional ground control points and image-matching routines were not used.

We also tested different cameras so the results show a mixture of potential errors.

 

Direct stitching results

The red points in the following image mark the same position across different flights.

RGB mosaic, day 1, flight 2

The following screenshots show the potential geo-referencing accuracy without any ground control points and after applying a linear shift (no scaling or rotation) to the images based on Google Earth data. The orthoimage as well as the DSM fit perfectly. The simple overlay and the mosaic show a small offset/distortion (more precise measuring/shifting should result in even better positioning). Still, this should be fine for many applications. Because it was snowing during parts of the flight, the image quality is rather poor.

 

Orthophoto

Stitched mosaic

Remarks

The results show that, on average, the PDOP reported by ArduCopter is quite a good estimator of the offset between single images within one mission (although it is labeled HDOP for unknown reasons). Positional errors can be seen within each mosaic, but they are most obvious in the RGB mosaic.

The offset between the different flights/cameras in the IR/RGB example is up to 4 m and not linear over the entire mosaic. But, as expected, the offset is smaller at the end of the mission, indicating that a longer warm-up period (and/or a reboot?) should reduce the offset at the beginning of a mission. In many cases the directly stitched mosaic therefore fits the true orthophoto quite well, and the overall offset is mostly consistent during a mission. The offset can easily be determined using Google Earth and thus corrected. If this is possible and sufficient, ground control points are not necessarily required to achieve positional accuracies < 1.5 m for a directly stitched mosaic. Orthophotos should still be preferred where possible; however, processing takes much longer.
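Applying such a linear shift is straightforward. A sketch, using a hypothetical reference point measured once in the mosaic and once in Google Earth (projected coordinates in metres; the helper name is mine, not from the actual tool):

```python
def apply_linear_shift(coords, ref_mosaic, ref_true):
    """Shift all mosaic coordinates by the offset between one reference
    point as placed in the mosaic and its true position.

    No scaling or rotation is applied, matching the simple correction
    described in the post. coords, ref_mosaic, ref_true are (x, y)
    tuples in a projected coordinate system (metres).
    """
    dx = ref_true[0] - ref_mosaic[0]
    dy = ref_true[1] - ref_mosaic[1]
    return [(x + dx, y + dy) for x, y in coords]
```

With a single well-identifiable point (a road marking, building corner, etc.) this removes the bulk of the constant GPS offset.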

Apart from some small pitch and roll offset from NADIR (a calibration error), there is often some small misalignment of the camera, so the images might be rotated. This rotation needs to be determined, and the yaw adjusted accordingly, before stitching.
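Adjusting for such a yaw misalignment amounts to rotating each image footprint about its centre before placing it in the mosaic. A minimal sketch (not the actual stitching code):

```python
import math

def rotate_about_center(points, yaw_deg):
    """Rotate 2D points (metres, relative to the image centre) by the
    measured yaw misalignment before placing the image in the mosaic.

    points: list of (x, y) tuples; positive yaw_deg rotates
    counter-clockwise.
    """
    a = math.radians(yaw_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    return [(x * cos_a - y * sin_a, x * sin_a + y * cos_a) for x, y in points]
```

Applying this to the four footprint corners of every image with one global misalignment angle is usually enough to remove the visible rotation between overlapping frames.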

 

4)         AC3.2 "issues"

Regarding AC3.2, there are some things that need to be tackled. These are rather minor "bugs", but they might add up to significant offsets of single images and/or the entire mosaic, or other inconsistencies:

  • For consistent stitching the exact flight altitude must be known for each image. However, the "altitude" currently used in AC3.2 is based on baro measurements. The problem is that there can/will be a drift in atmospheric pressure during the flight, which can add up to several meters over longer flights. See the link for an example from a 40 min survey.
  • I am not sure if this has a real influence, but there is some timing jitter (missing samples) which occurs at least with both M8N GPS modules (not sure about the 6H). A measurement offset of only 200 ms can produce a considerable spatial offset, especially at higher speeds. This jitter also results in some twitching.
  • In addition, there is some clock drift or communication latency between the GPS module and the Pixhawk. This can also lead to spatial offsets, depending on which log file entry one relies on for geotagging.
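For the baro drift, one simple workaround (an assumption on my side, not something implemented in AC3.2) is to distribute the accumulated drift linearly over the flight, assuming the copter takes off and lands at the same ground elevation so the first and last readings should agree:

```python
def correct_baro_drift(times, baro_alts):
    """Remove a linear baro drift from logged altitudes.

    Assumes takeoff and landing happen at the same ground elevation,
    so any difference between the first and last reading is treated as
    drift that accumulated linearly over the flight.
    times: seconds; baro_alts: metres (same length, time-sorted).
    """
    t0, t1 = times[0], times[-1]
    drift = baro_alts[-1] - baro_alts[0]  # drift accumulated over the flight
    return [alt - drift * (t - t0) / (t1 - t0) for t, alt in zip(times, baro_alts)]
```

This obviously cannot capture non-linear pressure changes (e.g. a weather front passing mid-flight), but it removes the dominant trend.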

 

Summary

For surveys where the highest precision is required, orthophotos and ground control points are the way to go. This is also true in areas with stronger relief. However, in many cases directly stitched images georeferenced using common GPS modules are sufficient. What is required is proper geotagging and a stabilized NADIR gimbal. This involves good GPS modules as well as specific CHDK scripts, log file analysis, and stitching routines. The advantage is that stitching can be done in the field within minutes, which makes it suitable for SAR missions as well as quick overviews for agricultural purposes.

To overcome some positional inaccuracies, image-matching techniques might be applied. However, there is a trade-off regarding processing time. A real boost in accuracy can be expected if RTK DGPS systems (read Piksi) are fully integrated and operational.

Any comments are welcome!

Best regards and a Happy New Year,

Thorsten

PS: A big thank-you goes to all the Ardupilot devs for all of their help and suggestions!



Comment by Thorsten on December 28, 2014 at 1:55pm

There is a limit on the number of images that can be embedded in a post - apparently seven. So here are some additional ones:

The first is another example of CAM_TRIGG_DIST (red) vs. corrected coordinates (green). In this case using CAM_TRIGG_DIST results only in a systematic but image-specific offset (there are no missing images):

This is how stitching (simple overlay) looks with some offset in yaw:

This is a hillshade model related to the orthophoto. It shows the combined accuracy of the GPS and bundle adjustment in MicMac. There was a general shift of 1.74m to the north compared to the Google data.

The following two images are the corresponding flights (IR day 1, flight 1 and IR day 2) to the RGB mosaic (of day 1, flight 2).

Comment by Phill Scott on December 28, 2014 at 3:47pm

Thorsten,  really good work. 

These are the sorts of issues I wasn't looking forward to investigating, so I'm glad you've made a head start.  Quantifying the positional error when taking images, and ensuring your camera is pointing at the ground, is really important if you want to go into the automated use of your collected data rather than using humans to interpret it.

Comment by Doug Walmsley on December 28, 2014 at 5:24pm

  Really enjoyed reading your post.  I tested only the CSG NEO-M8N and 3DR LEA-6H GPSs and found the M8N to be far superior in nearly all facets.  I'm sure 3DR is looking into improving their older-technology GPS units.

  Have you considered using a LIDAR system to help maintain accurate altitudes?  What altitude do you see as optimal for mapping?  Since most folks will be limited to below 400 ft without approval (from the FAA or whoever manages airspace in various countries) to operate higher, the LIDAR may be of help in holding altitude relative to terrain.

  The positional accuracy will always be an issue.  The fact that you have 18-20 satellites only shows that you won't be jumped to another spot on earth, but the truth about all these GPSs is that none are in the centimeter range for accuracy the way military systems are.  With free GPS signals, you will be pretty much limited to a drift in position of up to 3 meters.  The good news for hobbyists is that the M8N GPS is less likely to lose positional lock, and that prevents fly-aways.  I'm sure there are companies who would be willing to sell their GPS systems with their satellite signals, but I'm sure you would pay huge dollars to use them.

Comment by titeuf007 on December 28, 2014 at 5:32pm

Could you share your GPS config file with us?

I would like to use your settings on mine.


Comment by Chris Anderson on December 28, 2014 at 5:55pm

Great analysis and report. Respect!

Comment by unnamed idea on December 28, 2014 at 6:05pm

Thanks Thorsten, it was very informative. I wonder if you have data about the relative weight of GPS vs. other satellite services used during this testing. I currently use only GPS signals (with a NEO 6M) and I wonder how obtaining data from other services like Galileo affects the performance of a quadrotor. (I live in Europe, if it matters.) Thanks again!


Comment by Thorsten on December 29, 2014 at 3:29am

@Phill, yes, NADIR is really important and makes things much easier. Apart from the additional weight, the problem is a proper calibration of the gimbal. Even with a level it is never perfect. I am thinking of building a special calibration stand for that. However, I am quite happy with the stitching results, so this is maybe overkill.

@Doug, I ordered a LIDAR-Lite. But the range is limited and there seem to be some problems with interference with the GPS. The other thing is that a LIDAR adds additional weight to the system. Our copter flies 45-50 min with payload (brushless gimbal + camera); hence, I am rather trying to reduce weight. The new landing detector also works perfectly, so even for landing (a copter) a LIDAR is not really required for this type of mission.

There are some ideas to fix the problem of the pressure drift. I hope there will soon be a fix. 

@titeuf, you should find one in Peter's M8 discussion on drones-discuss. I am using the u-center tool to change the settings. But I will post it in the next few days.

@Chris, thanks!

@unnamed idea, I might be wrong, but as far as I know it is GPS only.


Comment by Antonius Lourenço Kasbergen on December 29, 2014 at 8:17am

Thanks for all the info! Are you using any simple stitching software or is it a custom routine? 


Comment by Thorsten on December 29, 2014 at 9:23am

@ALK, my pleasure!

It is a custom routine. Existing software was too slow and/or produced more artifacts compared to the GPS-based stitching presented here. Moreover, most available stitching routines do not provide georeferenced mosaics. I didn't mention it because I wanted to focus on the positional accuracy. 

The software would be worth another post. It is based on my experience with my entry for the T3 Season 2 - The Model challenge. Since that workflow was rather clumsy and complicated, we decided to develop an easy-to-use and fully integrated package. It is our "in-house" tool for image preprocessing, image quality control, geo-tagging, stitching, generating DSMs and orthophotos, data fusion, generating vegetation indices, ... It might be of special interest for this community because it provides a seamless integration with the Pixhawk autopilot. We will see.


Comment by Hugues on December 29, 2014 at 11:13am

Hi Thorsten,

Thank you for this analysis.

I have a question about your method to find the "real position of each picture". You correct the logged positions with various error & offset corrections due to camera triggering delay, pixhawk cam_trigger delay, etc...

It seems to me that the way to get the most exact position for each picture is to use the EXIF picture time data and simply add/subtract the GPS time offset versus the camera time. This method would indeed "include" all of the corrections for delays in the triggering chain. And it is also much simpler to do. So why don't you use that method instead? I would be curious how it could improve your results.
