We did a quick evaluation of how much accuracy we could achieve on all axes using a multirotor. We read many accuracy reports from fixed wings, and these teach us that the planimetric (horizontal) accuracy is usually about 1× the ground sampling distance (GSD) (0.5× if you're really good and have better cameras) and that the vertical (Z) accuracy is usually 2-3 times the horizontal accuracy. That's only valid for certain altitude ranges; the regular flight altitude for UAVs is between 80-150 meters. Forward velocity and trigger distance require a certain altitude to make it work.
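
For reference, GSD follows directly from altitude and camera geometry. Here is a minimal sketch; the sensor and lens numbers are made-up illustrations, not the camera used in this flight:

```python
# Rough GSD estimate for a nadir-pointing camera (hypothetical parameters).
def gsd_cm(altitude_m, focal_length_mm, sensor_width_mm, image_width_px):
    """Ground sampling distance (cm/pixel) for a nadir photo."""
    return (sensor_width_mm / image_width_px) * (altitude_m / focal_length_mm) * 100

# Example: a 7.6mm-wide sensor with a 5.2mm lens and 4000px-wide images at 45m.
print(round(gsd_cm(45, 5.2, 7.6, 4000), 2))  # ~1.64 cm/pixel
```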

Here we lowered the altitude from 80m to 40m and used a multirotor. We wanted to find out whether the vertical accuracy would indeed improve, and hopefully establish a 1:1 relationship between vertical accuracy and GSD as well. The reason vertical accuracy should improve is that there's more perspective in images taken at lower altitude, so each image picks up more height information, which translates into better Z estimates.

In this example case we flew at 45 meters with a hexacopter at a speed of 3 m/s to get a high 85% forward overlap, which would be difficult for a fixed wing to match. 211 photos were taken, and the resulting GSD is 1.41cm.
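
As a quick sanity check on those flight parameters, here is a minimal sketch of the trigger spacing implied by a given forward overlap; the 49m along-track footprint is an assumed figure, not one measured from this flight:

```python
# Forward spacing between exposures for a target overlap.
def trigger_distance_m(footprint_m, forward_overlap):
    """Distance to travel between exposures for the requested forward overlap."""
    return footprint_m * (1.0 - forward_overlap)

footprint = 49.0                   # along-track ground footprint at 45m (assumed)
d = trigger_distance_m(footprint, 0.85)
print(d, d / 3.0)                  # ~7.4m between shots, ~2.5s at 3 m/s
```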

The photos were georeferenced using 5 marker points collected with high-precision GPS equipment. The expectation is that when these GCPs are marked in the images there's about a 0.5-1 pixel deviation, so the error introduced by marking GCPs is expected to be about 0.5-1 GSD as well. Sharper pictures and better markers reduce that error.

In this case we had two less accurate GCPs, so the planimetric accuracy of this dataset eventually came out at 1.7cm, slightly above 1× GSD. What we did confirm is a vertical accuracy of 1.8cm for this set (or rather, the residual error from the mathematical model).

This dataset could have been improved as follows:

  • More careful marking of GCPs.
  • Sharper photos (better lenses).
  • Higher precision GPS.

In the end, the maximum accuracy one should expect with this equipment is 1× the GSD, and better equipment isn't going to magically change that. Note that this accuracy isn't correlated to the real world; validating against real-world points would be a totally different exercise altogether.

Here are some detailed images of the point cloud from close up. Notice the vertical detail of the irregular curb.

And here is the detail of a house. The planar surfaces aren't warped, which is a good indication of excellent georeferencing and accurate point triangulation.

This experiment is very relevant because lidar is commonly used for "real" precision projects, and is often the default choice for work that needs better-than-x-cm precision. Although lidar data is probably accurate to 5mm, it is also subject to occlusions, and the station needs to be moved around a lot to get proper point cloud coverage, so operating it isn't all that easy.

UAV point clouds may always have less accuracy than laser clouds, but they do have the advantage of the bird's-eye view: they generate data across an entire region with consistent accuracy and density, whereas lidar isn't consistently dense over every part of the survey area due to occlusions.

Price makes a big difference too. Lidar stations apparently cost $175k to acquire, whereas a UAV sets you back perhaps $3,000. The question one needs to answer is whether the slight improvement in accuracy is worth the extra money.

What this experiment shows is that 2cm vertical accuracy is probably within reach for UAVs as well, pending further experiments where datasets are compared against points in the real world.


Comments

  • @Walter: I think the most straightforward use of a GCP is to directly use the clicked 2D points in a 3D triangulation with a known 3D location. This avoids ambiguities further down the line about what to do when some GCPs are inaccurate, and about intermediate points between a good and a bad GCP. So if the GCPs are already part of the model, you let the bundle adjuster take care of how the camera matrices are adjusted to reduce the overall error. The GCPs basically determine the translation vector part of the matrix and the distance between cameras.

    In the Bundler software, the bundle adjuster allows constraints to be set on the variation of certain parameters. So if you have known good information, you allow those parameters to vary little, versus other parameters that may have high uncertainty. In fact, the software I was using had the same for GCPs, where you input a tolerance. That tolerance is potentially used to allow the GCP itself to shift a little bit to make the model fit better (from a mathematical perspective). This also makes clear that using the wrong tolerance can lead to much larger errors, especially in between GCPs, because the model will probably warp to get things to fit (see the sketch below).
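
    A minimal sketch of that soft-constraint idea, assuming a generic least-squares adjuster; this is illustrative only and is not Bundler's actual interface:

    ```python
    # A GCP with tolerance sigma contributes a weighted residual: a tight sigma
    # pins the point down, a loose sigma lets it shift to improve the fit.
    import numpy as np
    from scipy.optimize import least_squares

    gcp_measured = np.array([10.0, 20.0, 5.0])   # surveyed GCP coordinates (m)
    sigma = 0.02                                  # tolerance: 2cm standard deviation

    def residuals(p):
        point = p[:3]
        # A real adjustment would put image reprojection residuals here;
        # this stand-in term pulls the point toward a slightly different fit.
        reprojection = point - np.array([10.01, 19.99, 5.03])
        gcp_constraint = (point - gcp_measured) / sigma  # weighted GCP term
        return np.concatenate([reprojection, gcp_constraint])

    sol = least_squares(residuals, x0=gcp_measured)
    print(sol.x)  # the adjusted point balances image fit against the GCP tolerance
    ```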

    I work with focus locked at infinity; focusing takes the most time and is the step most likely to go wrong. Exposure probably finishes in equal time intervals in the majority of cases, but I've never done any tests to confirm this. I'll have a look at incorporating a log file next time so I can check whether that interval really is constant. I think in cloudy weather you probably get good consistency, but in hard sunlight there may be cases where the camera has to make severe adjustments between pictures; for example, flying over grass and dark earth and then over grey/white pavement makes an enormous difference in shutter time. Does the camera resolve the exposure calculation in equal time?

  • @Gerard: I think we have the trigger time/geotagging pretty much under control now. If you set your camera to manual focus you should get a reasonably constant time offset between the moment of the trigger signal and the moment of actual exposure. As long as the latency is constant it can be treated as a systematic error and thus dealt with accordingly.

    Currently my main concern is that the Multiview Stereopsis and Structure from Motion software suites do not explicitly tell you how the ultra-precise camera exposure positions are actually used in the point cloud generation. If camera exposure positions are only used to determine a linear transformation between model coordinates and the GCP coordinate system, then you don't really need the extra precision because, as you already stated and assuming random errors, the average of 300 precisely measured camera exposure positions will not differ significantly from the average of 300 approximately measured camera positions. It would be really nice if the precisely determined camera positions could be used as weighted observations in the point cloud generation. I doubt whether that is the case, though.

  • @Walter: What bothered me most was the time spent collecting the points. As GPS receivers get more precise, there will very soon be occasions where GCPs aren't necessary (perhaps even for cadastre applications), provided the position can be determined with high accuracy.

    The biggest hurdle is probably figuring out the precise delay of the camera. With CHDK it's not too difficult to get timing accurate to about 10ms (roughly the resolution it can offer). Since the exact moment of exposure is usually a bit of a guess, the CHDK trigger mechanism can figure out how much delay was caused by exposure metering and other housekeeping. It can then write a custom value to the JPG file, recording the interval between the trigger and the actual exposure.

    What still needs to be figured out is the exact position the trigger was sent from, but we have interpolation for that. If you assume the vehicle moved linearly over those 200ms, the exact position at trigger time can be derived. Usually the trigger coincides with reading a new GPS location, because I use the GPS distance trigger.

    Then you can extract the autopilot log and the camera log, and add the X ms delay read from the camera log to improve the actual position. In this overall system, if the time is determined to within 20ms, that means a precision of 15cm for the location of the photo, which is not bad! If that error has a normal distribution, then with more pictures taken the error eventually approaches zero (bundle adjustment will take care of it).
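
    A sketch of that interpolation step, assuming straight-line, constant-speed motion between the two GPS fixes bracketing the exposure; the log values and field layout here are hypothetical:

    ```python
    # Trigger time comes from the autopilot log, the trigger-to-exposure delay
    # from the camera (CHDK) log; the position is linearly interpolated.
    def interpolate_position(fix_a, fix_b, t_exposure):
        """fix_a/fix_b: (t, lat, lon, alt) GPS fixes bracketing the exposure."""
        t0, lat0, lon0, alt0 = fix_a
        t1, lat1, lon1, alt1 = fix_b
        f = (t_exposure - t0) / (t1 - t0)  # fraction of the way between fixes
        return (lat0 + f * (lat1 - lat0),
                lon0 + f * (lon1 - lon0),
                alt0 + f * (alt1 - alt0))

    t_trigger = 1000.000   # trigger time from autopilot log (s), hypothetical
    cam_delay = 0.120      # trigger-to-exposure delay from camera log (s)
    pos = interpolate_position((999.8, -23.00010, -46.00010, 45.2),
                               (1000.2, -23.00008, -46.00006, 45.3),
                               t_trigger + cam_delay)
    print(pos)
    ```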

    Sounds like an interesting experiment.

    How is it useful? There are places where people survey that are not (easily) accessible by foot, or that are too uncomfortable to be in for longer periods of time (jungle, insects, etc.).

  • To determine the accuracy of your map reliably, the accuracy with which you determine the coordinates of the check points should be significantly better than what you expect the accuracy of your map to be. So if your GSD is 1cm, the method you use to determine the coordinates of the check points should be accurate to better than 1cm: a rather challenging standard! Short-range (<10km) dual-frequency kinematic GNSS surveys (real-time or post-processed) with resolved integer ambiguities typically give accuracies of 5mm + 1ppm of the distance between reference station and rover. For example, if your base is located 1km (1,000,000mm) from your site you should expect 5mm + (0.000001 × 1,000,000)mm = 5mm + 1mm = 6mm. This assumes that all your staff bubbles are properly adjusted and that you can center your receiver properly over your check point.
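
    That 5mm + 1ppm rule of thumb as a one-liner, for anyone who wants to budget their baseline length:

    ```python
    # Expected fixed-ambiguity kinematic GNSS accuracy as a function of baseline.
    def gnss_accuracy_mm(baseline_m, base_mm=5.0, ppm=1.0):
        """Base error plus ppm-scaled error over the reference-rover baseline."""
        return base_mm + ppm * 1e-6 * (baseline_m * 1000.0)

    print(gnss_accuracy_mm(1000))    # 1km baseline  -> 6.0mm
    print(gnss_accuracy_mm(10000))   # 10km baseline -> 15.0mm
    ```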

    Using real-time dual-frequency GNSS positioning services will generally give you sub-2cm accuracies.

    If your GSD is less than 10cm you should thus use differential GNSS solutions for which the carrier ambiguities have been resolved. Any other GNSS (or GPS) method will not give sufficient accuracy. That is why we have developed the V-Map system, a lightweight, simple-to-operate dual-frequency GPS receiver that can be used on board any UAV to determine a very precise trajectory at a 5Hz positioning rate, or as a rover on a normal survey rod to accurately survey GCPs and check points.

    If you don't have carrier-phase GNSS equipment you may consider using a total station.

  • 10cm post-processed accuracy sounds about right. That's what I get with my Trimble system after differential correction. I'd be curious about their equipment; mine is a Trimble Juno 3B with a Pro XH receiver. I can collect averaged positions too, but typically only need to collect 60-second shots.

  • These GCPs were taken from a ground survey, although I couldn't get a straightforward answer from the surveyors about how accurate it is supposed to be. It's a system comparable to D-GPS. In this case though, the system sends measurements to a central server, which sends corrected positions back (so it doesn't transmit the error information); it is called RBMC-IP. Each survey point took 10 minutes to sample and to gather the corrected position.

    From what I gathered, the accuracy should be 10cm or better. We're going back to the site in 2 weeks to validate this data; then I should be able to find out how accurate these measurements are to begin with.

  • What method are you using to establish GCPs? Are you using flight logs or are you using external GPS shots?

  • For those looking to compare DSM/DEM models from one sensing method to another, remember that both lidar- and photogrammetry-derived raster DSMs/DEMs are based on points that are then interpolated; the native format of the data is not a raster. As such, you need to make sure that the interpolation methods are similar and comparable. You can create tons of raster DSMs/DEMs from the same point dataset and get wildly different accuracies, as the sketch below illustrates.
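
    A small demonstration of that point using synthetic data and scipy's scattered-data interpolator: the same points produce measurably different surfaces depending on the method chosen.

    ```python
    # Same scattered elevation points, three different rasters.
    import numpy as np
    from scipy.interpolate import griddata

    rng = np.random.default_rng(0)
    pts = rng.uniform(0, 100, size=(500, 2))         # synthetic survey points (x, y)
    z = np.sin(pts[:, 0] / 10.0) + 0.01 * pts[:, 1]  # synthetic elevations

    gx, gy = np.mgrid[0:100:200j, 0:100:200j]        # target raster grid
    for method in ("nearest", "linear", "cubic"):
        dem = griddata(pts, z, (gx, gy), method=method)
        print(method, float(np.nanstd(dem)))         # the surfaces differ by method
    ```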

    Also, here are some references for accuracy reporting.

    ASPRS LIDAR Accuracy Document (still applicable here):

    http://www.asprs.org/a/society/committees/lidar/Downloads/Vertical_...

    FGDC Accuracy Reporting:

    http://www.fgdc.gov/standards/projects/FGDC-standards-projects/accu...

  • Very interesting discussion. 

    To get a robust and objective geometric accuracy certification (at the 95% confidence level) of a geospatial raster product you need at least 20 check points (over and above those GCPs which you use for georeferencing). The check point coordinates have to be determined with a method that is more accurate than what you could expect from the raster product. So if your orthophoto has a GSD of x, the method you are using to establish control coordinates should have an accuracy better than x. From experience I have found that an analysis based on how well the GCP coordinates fit the model is way too optimistic. Using at least 20 check points to calculate the approximate horizontal circular error and the vertical accuracy (i.e. 1.96 × RMSE) at the 95% confidence level yields much larger error estimates: in my experience about 2 to 3 times GSD. A sketch of that computation follows below.
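
    A sketch of that statistic, assuming NSSDA-style reporting (horizontal circular accuracy as 1.7308 × radial RMSE, vertical as 1.96 × RMSE); the residuals below are synthetic:

    ```python
    # Accuracy at 95% confidence from independent check points.
    import numpy as np

    def accuracy_95(dx, dy, dz):
        """dx, dy, dz: map-minus-surveyed differences at the check points (m)."""
        dx, dy, dz = map(np.asarray, (dx, dy, dz))
        rmse_r = np.sqrt(np.mean(dx**2 + dy**2))   # radial (horizontal) RMSE
        rmse_z = np.sqrt(np.mean(dz**2))           # vertical RMSE
        return 1.7308 * rmse_r, 1.96 * rmse_z

    rng = np.random.default_rng(1)
    print(accuracy_95(rng.normal(0, 0.015, 20),    # synthetic 1.5cm-sigma residuals
                      rng.normal(0, 0.015, 20),
                      rng.normal(0, 0.020, 20)))
    ```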

    The advantage of going to the trouble of adding these error statistics to the metadata of your product is that the product, not the producer, is objectively documented and certified. This approach to quality control will drastically reduce the cost of mapping, open up the mapping market and curtail the powers of exclusive "professional" organizations which tend to reserve general mapping tasks for themselves by lobbying for restrictive regulations.

  • @me: Yes, it's the internal accuracy of the model, probably some mean retriangulation error over all matched features in all images after removing outliers. In scenarios where no GCPs are used, the solution has more freedom to converge towards a fit. In this case 5 GCPs were used, which puts specific constraints on some points in the model. There were 12-15 photo matches per GCP.

    Indeed, I don't see this number as an immediate indication of how accurately points correlate with the real world, but more as an indication of how well the extracted features were fit to a model after algorithms like linear least squares reduced the overall error, so it's not a magic number to me. This error is usually calculated from the sparse point cloud (the one used for feature matching). The model is correlated to the real world by GCPs, each of which has its own accuracy. So the final accuracy is determined by:

    1. how well features were recognized and matched in the source material (adds some error, as resolution is finite),

    2. how these features were fit into the model (doesn't fit perfectly),

    3. the accuracy of GCP points

    These issues are interrelated, and the number above only covers 2. Eventually, though, the error won't simply be 1+2+3; it needs to be measured from points in the field. It could be max(2,3), or just as well a number slightly below the mean (2+3)/2. As indicated, that is left as a separate exercise; it's important, but this post focused only on 2 for now. I consider it safe to assume that for the majority of points in the field (not the building), the mean error won't be larger than 2+3. Do you agree?

    About "more perspective":  the resolution and angle at which you observe objects from the side becomes more favourable. When you fly higher, you can achieve the same angle, but the resolution is poor and it will be nearer the edges where you have more chromatic aberration. So flying lower gets these objects closer to the middle in some photos and with more pixels, generating better Z estimates. I didn't get the baseline/altitude ratio comment, that'd only apply if you use stereo images only for depth estimations?  Here we use a whole range of monocular images far apart (about 15 per 3D point, up to 35 in one case).

    One issue that does come up for lower flights is that the quality of the orthomosaic decreases. So while flying lower is good for getting a better point cloud, it's bad for the orthomosaic because you get more extreme perspective differences. The orthomosaic is usually built up by choosing fragments of images covering each area. Imagine two photos taken 1 meter above a building, one taken just before crossing the roof edge and the other just after: image 1 has the outside wall in view, while in image 2 the roof protrudes too far, covering parts of the sidewalk. Both images cause issues when using that orthomosaic for horizontal distance assessments. That issue won't occur if you don't have (relatively) high obstacles. In this case I flew at 45m and the building was about 15m, so I do see these distortions and issues appearing.

    For lidar I wasn't thinking of mobile lidars, only stationary lidars on the ground. There's a Pix4D white paper comparing lidar with UAV remote sensing, hence that specific frame of mind. The same frame of mind explains why I mentioned that occlusions are an issue for ground-based lidars. For airborne lidars you also need to incorporate position and attitude uncertainty into the measurements.

    Those papers comparing the results are here:

    https://www.sensefly.com/fileadmin/user_upload/images/newsandpress/...

    http://pix4d.com/accurate-uav-surveying-determining-stockpile-volumes/

    https://www.youtube.com/watch?v=ENGMNGnFFr4
