We did a quick evaluation of how much accuracy we could achieve on all axes using a multirotor. Accuracy reports from fixed wings suggest that planimetric (horizontal) accuracy is usually about 1x the ground sampling distance (GSD) (0.5x if you're really good and have better cameras) and that vertical (Z) accuracy is usually 2-3 times the horizontal accuracy. That only holds for certain altitude ranges, the regular 80-150 meter flight altitudes for UAVs, because forward velocity and trigger distance require a certain altitude to work out.
Here we lowered the altitude from 80m to 40m and used a multirotor. We wanted to find out whether the vertical accuracy would indeed improve, and hopefully establish a 1:1 relationship between vertical accuracy and GSD as well. The reason vertical accuracy should improve is that there's more perspective in images taken at lower altitude, so each image picks up more height information, which translates into better Z estimates.
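That intuition can be put in rough numbers with the standard normal-case stereo rule of thumb, sigma_Z ~ (H/B) * (H/c) * sigma_image. The sketch below is purely illustrative: the focal length and pixel pitch are assumed values for a small compact camera, and single-pair figures like these are pessimistic compared to a full multi-view bundle adjustment. The point is the scaling, not the absolute numbers.

```python
# Normal-case stereo rule of thumb: sigma_Z ~ (H/B) * (H/c) * sigma_image.
# Camera values are assumptions for a small-sensor compact; the single-pair
# result is pessimistic compared to a multi-view bundle adjustment.

def z_sigma(height_m, baseline_m, focal_mm=5.2, pixel_um=1.86, match_err_px=0.5):
    sigma_image_m = match_err_px * pixel_um * 1e-6   # matching error on the sensor, metres
    return (height_m / baseline_m) * (height_m / (focal_mm * 1e-3)) * sigma_image_m

# Same 85% forward overlap at both heights, so the single-pair baseline scales with height.
s80 = z_sigma(80, 7.2)
s40 = z_sigma(40, 3.6)
print(f"sigma_Z(80 m) / sigma_Z(40 m) = {s80 / s40:.1f}")  # ~2.0: halving the altitude halves the Z error
```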
In this example case we flew at 45 meters with a hexa at a speed of 3 m/s to get a high 85% forward overlap, something that would be difficult for a wing to match. 211 photos were taken, producing a GSD of 1.41cm.
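For reference, here is roughly how those flight numbers relate to each other. The sensor size and focal length below are assumptions for a small-sensor compact camera (the exact camera isn't specified here), so the output only approximates the 1.41cm figure.

```python
# Back-of-the-envelope flight planning numbers. Camera parameters are assumed,
# not the exact values of the camera used for this dataset.

altitude_m  = 45.0
focal_mm    = 5.2      # assumed focal length
sensor_w_mm = 7.44     # assumed sensor width
image_w_px  = 4000
image_h_px  = 3000     # assumed to be the along-track dimension
overlap_fwd = 0.85
speed_ms    = 3.0

gsd_m = (altitude_m * sensor_w_mm) / (focal_mm * image_w_px)   # metres per pixel

footprint_along_m  = gsd_m * image_h_px                  # ground footprint in the flight direction
trigger_dist_m     = footprint_along_m * (1 - overlap_fwd)
trigger_interval_s = trigger_dist_m / speed_ms

print(f"GSD           ~ {gsd_m * 100:.2f} cm/px")
print(f"Trigger every ~ {trigger_dist_m:.1f} m ({trigger_interval_s:.1f} s at {speed_ms} m/s)")
```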
The photos were georeferenced using 5 marker points collected with high-precision GPS equipment. When these GCPs are marked in the images, there's typically a 0.5-1 pixel deviation, so the error introduced by marking GCPs is expected to be about 0.5-1 GSD as well. Sharper pictures and better markers reduce that error.
In this case we had two less accurate GCPs, so the planimetric accuracy of this dataset came out at 1.7cm, slightly above 1x GSD. What we did confirm is a vertical accuracy of 1.8cm for this set (or rather, the residual error from the mathematical model).
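For readers wondering how figures like 1.7cm and 1.8cm are typically derived: they are usually the root-mean-square of the GCP residuals reported after bundle adjustment. A minimal sketch with made-up residuals (chosen to give roughly the same RMS values; they are not the actual residuals from this dataset):

```python
import math

# Hypothetical per-GCP residuals (metres) after bundle adjustment, for illustration only.
residuals = [
    # (dx, dy, dz)
    (0.008, -0.006, 0.013),
    (-0.012, 0.009, -0.021),
    (0.019, -0.014, 0.018),   # the two "less accurate" GCPs pull the planimetric RMS up
    (-0.016, 0.017, -0.014),
    (0.005, -0.003, 0.023),
]

n = len(residuals)
rms_xy = math.sqrt(sum(dx * dx + dy * dy for dx, dy, _ in residuals) / n)
rms_z  = math.sqrt(sum(dz * dz for _, _, dz in residuals) / n)

print(f"planimetric RMS: {rms_xy * 100:.1f} cm")
print(f"vertical RMS:    {rms_z * 100:.1f} cm")
```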
This dataset could have been improved as follows:
- More careful marking of GCPs in the images.
- Sharper photos (better lenses).
- Higher precision GPS.
In the end, the maximum accuracy one should expect with this equipment is 1x the GSD; better equipment isn't going to make this magically better. Note that this accuracy is internal to the model and not yet tied to independently measured points in the real world; verifying that would be a totally different exercise altogether.
Here are some detailed images of the point cloud from close up. Notice the vertical detail of the irregular curb.
And the detail of a house. The planar surfaces aren't warped, a good indication of excellent georeferencing and accurate point triangulation.
This experiment is very relevant because lidar is commonly used for "real" precision projects, the kind of work where better than x cm precision is required. Although lidar data is probably accurate to about 5mm, it is also subject to occlusions and the station needs to be moved around a lot to get proper point cloud coverage, so operating it isn't all that easy.
UAV point clouds may always have less accuracy than laser clouds, but they do have the advantage of the bird's eye view: they generate data across an entire region with the same consistency in accuracy and density, whereas lidar isn't that consistently dense for every part of the survey area due to occlusions.
Price makes a big difference too. Lidar stations apparently cost around $175k to acquire, whereas a UAV setup sets you back maybe $3,000. The question one needs to answer is whether the slight improvement in accuracy is worth the extra money.
What this experiment shows is that 2cm vertical accuracy is probably within reach for UAVs as well, pending further experiments where datasets are compared against independently measured points in the real world.
Comments
Hi Thorsten, please let us know when you have done it and give us some feedback on it.
Thanks Gerard, this is important research! I am planning to test and compare direct georeferencing using different GPS systems. Since I need some GCPs as a reference anyway, I will also run some similar tests to the one you presented. Additionally I am planning to compare different cameras. There is a test site where a LIDAR DEM is available, so I can compare it directly. However I am not sure about the resolution of the LIDAR data in that area. I'll report.
Hey Martin,
I agree with the researchers you spoke to about LIDAR being better in accuracy than photogrammetry, but the cost difference is what makes DEM/DSM generation from photos so attractive. LIDAR systems run $100K-300K, which is great if you have the resources, but a lot of small companies or local governments could really benefit from a DSM generated from imagery at a fraction of the cost and with decent accuracy.
I must agree though, penetrating forest canopy is a huge advantage of LIDAR and one reason it is always more desirable than imagery if you can afford it. I hope LIDAR sensors continue to come down in price as demand increases with UAV use; only time will tell!
Could you give us some explanation of how you measure the accuracy of the model? Do you think the RMS on the GCPs after photo alignment or bundle adjustment is a good estimate? I'd say that's somewhat wrong.
I'd also ask for clarification regarding this: "there's more perspective in images at lower altitude, so you pick up more height information in each image, which corresponds to better Z estimates..."
Accuracy is linked to several variables, and you have correctly pointed out some of them, but you have missed the altitude/baseline ratio and the focal length, which are also very important.
Airborne lidar accuracy depends not only on the ranging principle and technology itself but on the accuracy of the system that gives the sensor its absolute position. You cannot say a single lidar measurement is 5mm accurate when it is very unlikely you knew the plane's position at that time-stamp with such accuracy. In photogrammetry the whole frame is captured at the very same instant (with permission of Mr. rolling shutter), so in theory every single point (ray) in the scene can be used to estimate that one camera position. In lidar, each pulse makes a round trip from plane to ground while the actual position and attitude of the plane are unknown for every single pulse, so your accuracy basically relies on the IMU/GPS; every lidar point measurement needs to be "corrected" by external records. You can use ground control to perform some post-correction, but your statement that lidar has to be more accurate than photogrammetry is false. In fact, the same IMU/GPS gadgets are used in regular photogrammetric projects only to deliver first estimates of camera orientation, and they are replaced by the more accurate results of the aerial triangulation in the end.
I'd say there is another misconception in your post when you say occlusions are more of an issue for lidar than for photogrammetry. Why do you say that? For a point to be visible to lidar, it just needs to be hit once; for photogrammetry it must be seen in at least two images. Don't you see the former is much more likely to capture a given point than the latter? Just carry the lidar along the same flight path as the camera, slowly enough, and you will see that fewer occlusions occur.
You're talking about internal precision of the model right? Not fundamental accuracy? Did you evaluate accuracy using independent high-precision GPS points not used for georeferencing? How did you calculate these metrics?
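To make the distinction in that question concrete: residuals on GCPs that were used in the adjustment only measure internal consistency, while accuracy requires surveyed points held out of the georeferencing. A minimal sketch with hypothetical coordinates (not data from this project):

```python
import math

# Hypothetical surveyed check points (from the GPS survey) and the coordinates of the
# same points read out of the finished model. All values are invented for illustration.
surveyed = {"cp1": (1001.00, 2005.00, 50.00), "cp2": (1080.00, 2110.00, 51.20)}
model    = {"cp1": (1001.02, 2004.98, 50.03), "cp2": (1079.97, 2110.02, 51.16)}

# cp1 and cp2 were NOT used for georeferencing, so errors against them measure
# accuracy rather than the adjustment's residuals on its own control points.
dz2 = []
for name, (sx, sy, sz) in surveyed.items():
    mx, my, mz = model[name]
    dz2.append((mz - sz) ** 2)

print(f"check-point vertical RMSE: {math.sqrt(sum(dz2) / len(dz2)) * 100:.1f} cm")
```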
Your error certainly varies by the type of material you're imaging and spatially within your model. If it's grass or tall vegetation you're likely sunk, because most of the points you extract from the images will be of the top of the vegetation. Points collected on a slope may not be as accurate as those on a flat surface. Similarly, darker pixels may not be as accurate as bright pixels, or vice versa. LIDAR is not much better but can penetrate the canopy somewhat, mostly because the beam finds its way through gaps in the vegetation without being scattered; the LIDAR pulse is not going through the leaf itself. Even with LIDAR, grasses and other dense, even land covers are a problem, especially when the half-pulse length is longer than the height of the vegetation.
Any geospatial product is only going to be as good as you georeference it. What type of GPS are you using?
Very interesting work, I'd love to see more!
Ziptied to the battery bay. At 3 m/s you don't get a lot of pitch/roll. Also, the CG is pretty low below the props compared to agile quads.
Did you use a gimbal for these pictures or just a fixed mount for your Canon?
I talked to some guys who are doing mapping with a Cessna. They are using a special (very high resolution) mapping camera and a LIDAR. They told me that the DEM you get from a regular camera is in no way comparable to what you get from a LIDAR in regard to accuracy. One important aspect was that you can get a DTM with a lidar because it works through the canopy (not in the summer obviously).
Michael: long-endurance multirotors exist and they're not necessarily horribly expensive. Look for >14/15" prop sizes, light frames, low-kv motors of good quality and light batteries (NCR<whatever> or the Maxamps 11Ah 5C). Lower C ratings mean lower weight at the expense of lower current output, so you can't do anything aggressive on them.
Mine stays aloft about 35 minutes, three times what I needed for this mapping run, or enough for 400x400 from a higher altitude.
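To put rough numbers on the battery comment above: the C rating caps the current the pack can deliver, and hover time is basically usable capacity over hover current. A back-of-the-envelope sketch; the hover current is an assumed figure for an efficient large-prop hexa, not a measurement from this airframe.

```python
# Rough hover endurance estimate. Hover current and usable fraction are assumptions.
capacity_ah = 11.0    # e.g. the 11Ah pack mentioned above
c_rating    = 5
hover_amps  = 15.0    # assumed average hover draw
usable_frac = 0.8     # don't run the pack flat

max_amps   = capacity_ah * c_rating                        # pack current limit (~55 A)
hover_mins = capacity_ah * usable_frac / hover_amps * 60

print(f"pack current limit ~ {max_amps:.0f} A (why aggressive flying is out)")
print(f"hover endurance    ~ {hover_mins:.0f} min")
```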
Joseph: I aim for 1/1000 shutter times. It's hanging from a battery bay that's attached with vib dampers using zipties. All images in my set are in focus and used, no discards (this time). I think with bright sun and more wind there'll probably be some.
I am very impressed with what can be achieved nowadays with the technology that we have available. We have been "playing" around with Quads and Hexas together with Photoscan which has produced some remarkable results. Up to now we have used the Autopilot systems from ZeroUav, but after seeing what the new Pixhawk can do we have decided to go the Pixhawk route....much cheaper! Our only concern is the time that our Multirotors can stay aloft......max 15mins!