Misconceptions about UAV-collected NDVI imagery and the Agribotix experience in ground truthing these images for agriculture

Thanks to all of you for your great feedback on my last blog post on airframe selection. There has been a lot of interest and misinformation about NDVI images here recently, so I thought the topic deserved its own post. Briefly, I explained a little bit of history, why NDVI may not be the best index for UAVs, and showed some ground-truthing we did with NDVI maps. It ended up being a little too long to summarize here, but check it out at http://agribotix.com/blog/2014/6/10/misconceptions-about-uav-collec...

I would encourage anyone interested in the topic to check out some of Compton Tucker's original papers on the subject, as they are really illuminating as to how the index came about. There is a lot of home research on Public Lab dedicated to making NDVI work with different systems, but NDVI was never a gold standard, and different equipment may require different image processing metrics.


Replies to This Discussion

Jesus and Daniel, keep in mind that ImageJ is an image processing package focused on the biomedical field and QGIS is a GIS. These are very different applications and audiences, and neither is ideally suited for what we want to do. In any case, QGIS is my primary GIS these days, and it provides hooks into lots of other software, including some very powerful remote sensing focused image processing packages. A new set of QGIS tutorials that people seem to like is here: http://qgis-tutorials.mangomap.com/. The documentation on the QGIS site is decent too. It would be great if someone were interested in creating an image processing package for small format aerial image processing. Not a small task.

Daniel, thanks for posting easy-to-understand instructions. I just updated the GitHub intro page with instructions. I expect you're not the first person to run into this problem, but you're the first to let me know about it.

If we have five different people process the image, we'll probably get five different results if there is no sort of calibration or standardization in the methods. When I did the processing, the only eyeballing was picking the tip and tail of the histogram. There were very few saturated pixels at either end (0 or 255).
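
For what it's worth, that tip-and-tail step can be made repeatable by clipping at fixed percentiles instead of eyeballing. A minimal Python sketch, assuming a single-band numpy array (the 2%/98% cutoffs are illustrative defaults, not the values I used):

```python
import numpy as np

def stretch_to_8bit(band, low_pct=2.0, high_pct=98.0):
    """Linear stretch between the histogram 'tip' and 'tail'.

    Pixels at or below the low percentile map to 0, those at or above
    the high percentile map to 255, so only a handful of pixels
    saturate at either end.
    """
    lo, hi = np.percentile(band, (low_pct, high_pct))
    scaled = (band.astype(np.float64) - lo) / max(hi - lo, 1e-9)
    return (np.clip(scaled, 0.0, 1.0) * 255).astype(np.uint8)
```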

You seem to have a camera system with a filter and white balance optimized for DVI. If I were interested in using that for NDVI, I could either use a different filter/white balance setup or create a processing chain to make reliable NDVI images. The information needed to create good NDVI or good DVI products is in the image. If we want a reliable test of the advantages and limitations of the two algorithms, I don't see any way around using calibrated images. I think the point-and-shoot calibration is still too experimental, but I'm sure we can get some images from a calibrated camera to use for testing.
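
To make the distinction concrete, the two indices differ only in whether the band difference is normalized. A numpy sketch of both, assuming calibrated NIR and visible bands as arrays (my own illustration, not anyone's plugin code):

```python
import numpy as np

def dvi(nir, vis):
    """DVI = NIR - VIS. The raw difference scales with overall scene
    brightness, which is one reason calibrated images matter here."""
    return nir.astype(np.float64) - vis.astype(np.float64)

def ndvi(nir, vis):
    """NDVI = (NIR - VIS) / (NIR + VIS), in [-1, 1]. The ratio cancels
    multiplicative brightness changes that DVI does not."""
    nir = nir.astype(np.float64)
    vis = vis.astype(np.float64)
    total = nir + vis
    # Leave completely dark pixels at 0 instead of dividing by zero
    return np.divide(nir - vis, total,
                     out=np.zeros_like(total), where=total > 0)
```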

Reducing resolution is sensible in certain situations, but I don't follow what you mean by making 10m x 10m grids and overlaying them on the visible image.

I'd like to add another mosaic. With more time on my hands, I tried two things based on the feedback earlier in the thread:

  • I played with the DVI scaling parameters in the plugin until I got a more realistic range (in this case 0 to 128) - this fixed the saturation problem, and I now have an image that contains as much information as the NDVI version (see the sketch after this list).
  • I removed the shadow parts from the NRG and ran the NDVI process on the 'de-shadowed' version.
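
For anyone who wants to reproduce that outside the plugin, a rough numpy sketch of the same fixed-window rescale (the 0 to 128 window matches my numbers above; you'd tune it per mosaic):

```python
import numpy as np

def scale_index_for_display(index, lo=0.0, hi=128.0):
    """Map index values in [lo, hi] onto the full 0-255 display range.

    Values outside the window clip to black or white instead of
    squeezing the useful range into a narrow, washed-out band.
    """
    scaled = (index.astype(np.float64) - lo) / (hi - lo)
    return (np.clip(scaled, 0.0, 1.0) * 255).astype(np.uint8)
```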

The result is two images that, on the face of it, contain very similar information and appear similarly useful. Daniel's original claim that DVI is very effective in dealing with false NDVI from shadow is completely backed up by these images. On the other hand, running a colour-replacer batch script on your images after defining the colour representing shadow and setting tolerances will also produce shadow-corrected NDVI images.


Hi John,

What did you do to remove the shadows? Did you do it manually?

Hi Dan

This was actually Ned Horning's idea; he suggests it earlier in this thread.

I used a simple colour replacer tool. Most bitmap editors have some version of it - you select the shadow colour and a neutral replacement colour, set the tolerances for the tool, and then replace the shadow areas by dragging the tool over them. I did it manually for this single image, but for a collection of images I'd simply record a macro/action and run it on the whole batch.
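
For anyone who'd rather script it than drag a tool around, here's a rough Python equivalent; the shadow colour, tolerance, and neutral colour below are placeholders you'd sample from your own image:

```python
import numpy as np
from PIL import Image

def replace_shadow(path, shadow_rgb=(20, 25, 30), tol=25,
                   neutral_rgb=(128, 128, 128)):
    """Swap pixels within `tol` of the sampled shadow colour for a
    neutral colour, mimicking a bitmap editor's colour replacer.
    The default colours and tolerance are placeholders.
    """
    img = np.asarray(Image.open(path).convert("RGB")).astype(np.int16)
    dist = np.abs(img - np.array(shadow_rgb)).max(axis=-1)
    img[dist <= tol] = neutral_rgb
    return Image.fromarray(img.astype(np.uint8))
```

Looping that over a folder gives the same effect as recording a macro and running it on the batch.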

Dear Daniel

Congrats on the good work (Ned, you too - amazing job).

Daniel, I tried to run your script for the DVI index and ran into this error. Enclosed is an image of the error.

thanks

jl


Hi Joscht

I believe you are trying to select your image when prompted for a LUT (lookup table). Download the Agribotix_LUT from one of the previous posts on this thread or use any other LUT (they use .lut as an extension) and open that instead of the image. The script will ask you to select the folder containing your images after that step.
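
If you ever need to apply the LUT outside of ImageJ, here's a quick Python sketch. It assumes the .lut file is the raw 768-byte binary layout ImageJ commonly uses (256 red values, then 256 green, then 256 blue); a text-format LUT would need a different reader:

```python
import numpy as np
from PIL import Image

def apply_lut(gray_path, lut_path):
    """Apply a raw binary ImageJ-style .lut to an 8-bit grayscale image.

    The file is assumed to hold 768 bytes: 256 reds, then 256 greens,
    then 256 blues. Each gray level indexes a row of the table.
    """
    table = np.fromfile(lut_path, dtype=np.uint8).reshape(3, 256).T
    gray = np.asarray(Image.open(gray_path).convert("L"))
    return Image.fromarray(table[gray])  # (H, W) -> (H, W, 3) RGB
```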

Best,

Daniel

Works amazingly well

thanks

Hi Daniel,

I enjoyed your well-written article.

I have been playing with the BG3 filter.  You mentioned that the new Event 38 custom filter will give a better Green channel, and I can certainly see this by looking at the published transmission curve.

However, I saw that the custom filter's transmission curve shoots up at about 670nm, reaching almost maximum at 700nm.  This compares to the BG3, which doesn't reach maximum until about 740nm.

See the attached vegetation spectral curve that I found on the internet.  Assuming this is correct, you really want the transmission curve to start rising no earlier than about 710nm.  Otherwise, your NIR channel will be collecting light to the left of the point where the curves for healthy and unhealthy vegetation cross each other.  In this region, UNHEALTHY vegetation will be contributing to the NIR channel MORE than HEALTHY vegetation, which is the opposite of what you want. This will reduce the effect of everything to the right of that crossover point.

Going back to the transmission curves mentioned at the top of my post, it would seem that the BG3 is actually better in this regard.  I guess that what's going on to the left of that crossover point is minor compared to the huge, wide band of NIR wavelengths above, say, 710nm, so the BG3 is only *slightly* better in the NIR channel, and it's no big deal.

I guess what I am trying to ask is:  Why is the new custom filter from Event 38 better than the BG3?  Yes, it can see more Green, so you can use Green as the visible channel instead of Blue.  I'm guessing that is the key here.  So the question then becomes:

Why is it better to use Green than Blue for the visible channel?

Thanks for this interesting discussion!

Richard (in Australia)


Hi Richard,

For a camera equipped with the Event38 filter, I believe green is a better choice than blue because the blue channel picks up quite a bit of NIR signal. Based on my fairly unscientific experiment of imaging a hot stove that was not emitting in the visible and quantifying pixel intensity, I saw that the blue channel picked up about 80% as much NIR as the red channel, while the green channel picked up only a small fraction. I don't have the exact numbers handy, but you should do the experiment yourself if you're curious.
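
If you do try it, the measurement itself is just the per-channel means over the glowing region. A quick sketch, where the file name and crop box are placeholders for your own shot:

```python
import numpy as np
from PIL import Image

# Placeholder path and crop box: adjust to frame the hot element,
# which emits NIR but essentially nothing visible.
img = np.asarray(Image.open("hot_stove.jpg").convert("RGB")).astype(float)
region = img[100:200, 150:250]

r_mean = region[..., 0].mean()
g_mean = region[..., 1].mean()
b_mean = region[..., 2].mean()
print(f"NIR leakage vs. red channel: green {g_mean / r_mean:.0%}, "
      f"blue {b_mean / r_mean:.0%}")
```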

That said, for both the Schott and Event38 filters, the channels are really blue + NIR, green, and NIR. A more or less intact green channel seemed valuable for calculating vegetation indices, but, if you play with my script, you'll see there is really no difference between using the blue or the green as the visible channel, so it probably doesn't matter which one you choose.

I haven't compared the two filters in detail, but I have done significant ground truthing with images taken using the Event38 filter and the NIR-VIS index and can say with confidence that we are getting solid results. I will show some of them off at DroneCon, if you'll be there, but if not I'll write up a blog post in the next week or so. I think you would likely get good results using the Schott as well.

I would also be careful with plots like the one you attached. There are a lot of reflectance spectra out there that present the information too generally. What is a healthy plant? What is an unhealthy plant? The sensitivities of the detectors in different cameras are also not widely published, and I think the solution to developing a reliable vegetation index is just going out, collecting a bunch of images, and seeing which ones correlate with what's going on on the ground.
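
Concretely, that just means pairing per-plot index averages with whatever you measured on the ground and checking the correlation. A tiny sketch with purely made-up numbers:

```python
import numpy as np

# Hypothetical numbers purely for illustration: mean index per plot
# and a matching ground measurement (e.g. biomass or chlorophyll).
index_means  = np.array([0.42, 0.55, 0.31, 0.61, 0.48])
ground_truth = np.array([1.8, 2.6, 1.2, 2.9, 2.1])

r = np.corrcoef(index_means, ground_truth)[0, 1]
print(f"Pearson r between index and ground truth: {r:.2f}")
```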

Good luck with your imaging and let me know if I can answer any more questions!

Best,

Daniel

Canon sx260 + Event38

Agribotix script and LUT

Maize


Hi Antonio, 

If I may ask, what image processing system are you using? Fiji (ImageJ)?

Thanks,

Jim
