Misconceptions about UAV-collected NDVI imagery and the Agribotix experience in ground truthing these images for agriculture

Thanks to all of you for your great feedback on my last blog post on airframe selection. There has been a lot of interest and misinformation about NDVI images on here recently, so I thought the topic deserved its own post. Briefly, I explained a little bit of history, why NDVI may not be the best index for UAVs, and showed some ground-truthing we did with NDVI maps. It ended up being a little too long to summarize here, but check it out at http://agribotix.com/blog/2014/6/10/misconceptions-about-uav-collec...

I would encourage anyone interested in the topic to check out some of Compton Tucker's original papers on the subject, as they are really illuminating as to how the index came about. There is a lot of home research on Public Labs dedicated to making NDVI work with different systems, but NDVI was never a gold standard, and different equipment may require different image-processing metrics.


Replies to This Discussion

Daniel, this is a great write-up.  I've been testing NDVI as a workflow and have found a lot of the same issues you have, but assumed it was operator error.  I'd love to see the UAV community push a new standard index through to be tested and verified by research facilities!

Daniel, that was a fantastic post. Clear, insightful and full of excellent examples. Bravo!

Yes, great write up. Thank you. I just got out to a field and tested, for the first time, my SX260 with an NGB filter from Event38 (had fun modding that camera... stupid glue). I cropped the images to be field sized and ran them through Fiji (ImageJ) to see what I could see. Damn. It was not much. I'd be interested to hear if there is a way to do the NIR-VIS comparison in Fiji. Is there? (Also, any advice on programs would be great.)

Love the clarity of your article, thank you.

Thanks Daniel for taking the time to illuminate these points. There is certainly a lot of research still to be done on finding good remote sensing techniques using inexpensive consumer equipment. Especially since US university research with UAVs is essentially grounded at the moment, anything shared with the public and not kept proprietary is a great benefit to all in the community. So thanks again.

If I may, I'd like to contribute a few definitions for anyone interested in the etymology and reasoning behind the name Normalized Difference Vegetation Index (NDVI).

(I am paraphrasing these from "Remote Sensing of Vegetation: Principles, Techniques, and Applications" by Hamlyn G. Jones; they may well be a post-hoc explanation of the names, but I found them useful for guiding my thinking.)

  • Vegetation Index - any number that is computed from the spectral response at a given pixel that is meant to correlate to some aspect of plant activity in that pixel
  • Difference Vegetation Index - any vegetation index computed in the form of X minus Y.  Generally the numbers for X and Y are measured reflectance or radiance levels at different wavelengths of light, e.g. NIR - Red, but could equally well be Green-Blue.
  • Normalized Difference Vegetation Index - a difference vegetation index that is normalized (i.e. re-scaled). Normalization is a standard data-processing technique. It is used to map the numbers into the range you want for analysis or to remove the effects of some confounding factor. In the case of the classical NDVI, the normalization is meant to try to remove the effects of differing illumination levels.

So given these definitions, what we might call the classical NDVI is really just one of many vegetation indices that could be computed in that style. 
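
To make the distinction concrete, here is a minimal Python/NumPy sketch (my own toy illustration with made-up reflectance numbers, not anything from the Agribotix workflow) showing a plain difference index next to its normalized cousin:

```python
import numpy as np

def difference_index(nir, vis):
    """Difference Vegetation Index: simply X minus Y (here NIR minus a visible band)."""
    return nir.astype(np.float64) - vis.astype(np.float64)

def normalized_difference_index(nir, vis, eps=1e-9):
    """Classical NDVI-style index: the same difference, re-scaled by the sum
    so the result falls in [-1, 1] regardless of overall brightness."""
    nir = nir.astype(np.float64)
    vis = vis.astype(np.float64)
    return (nir - vis) / (nir + vis + eps)

# Two pixels of the same material, the second at half the illumination.
nir = np.array([0.60, 0.30])
red = np.array([0.10, 0.05])

print(difference_index(nir, red))             # [0.5  0.25]  -> changes with brightness
print(normalized_difference_index(nir, red))  # [0.714 0.714] -> brightness cancels out
```

The toy numbers show what the normalization is supposed to buy you: the plain difference changes with illumination, while the normalized version stays put, at least in the idealized case where both bands dim by the same factor.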

So why do I bring all of this up? Really just to complement what Daniel/Agribotix said in their post. Classical NDVI is not *the* answer for measuring plant health. It's really just one of any number of computations you could do. It has been found useful for some applications in the past, and other computations are not inherently better or worse, just... different.

Oh, and by the way, I don't mean to be negative, just to offer some constructive criticism: be careful with the custom color maps used in several of the images. They can easily mislead someone into thinking there is more information present than there actually is. What is the difference between yellow and green, or between red and black? Very little, probably; that's just where somebody decided to draw the line and start classifying things as one type of material vs. another. Unless they are based on some underlying physical principles, those choices tend not to carry over from one scenario to the next. Check out http://www.cs.ubc.ca/~tmm/courses/533-07/readings/pravda/truevis.htm for some good theory on how to choose color maps.

@Daniel,

I second this!

I have played around with different indices. Keeping it "simple" and using NIR-VIS to make vegetation maps and identify areas of different vegetation density is probably the best way to go. If you are looking at specific plant diseases, I suggest trying to build specific regression models based on all of the channels available.
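
As a rough sketch of what I mean by that (the band values and ground-truth numbers below are completely made up, just to show the shape of the workflow, not a tested recipe):

```python
import numpy as np

# Each row: mean reflectance of one ground-truthed plot in [blue, green, red, NIR].
# These values are invented purely for illustration.
X = np.array([
    [0.04, 0.08, 0.05, 0.45],
    [0.05, 0.09, 0.07, 0.38],
    [0.06, 0.10, 0.10, 0.30],
    [0.07, 0.11, 0.13, 0.22],
])
y = np.array([3.1, 2.6, 1.9, 1.2])  # e.g., measured biomass or a disease score per plot

# Ordinary least squares with an intercept term.
A = np.column_stack([X, np.ones(len(X))])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(bands):
    """Predict the parameter for a new pixel/plot from its four band values."""
    return np.dot(np.append(bands, 1.0), coeffs)

print(predict([0.055, 0.095, 0.085, 0.33]))
```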

Kind regards,

Thorsten

Daniel, it's nice to see this discussion about NDVI and an alternative. I'll throw in a few additional thoughts and try to point out some more misconceptions. The article gives the impression (in the description and a figure) that plants reflect roughly the same amount of red and blue light. Although that's the case for healthy green vegetation, it's not accurate for stressed and dead vegetation. There is a graph here: http://i.publiclab.org/system/images/photos/000/002/097/original/4G... that contrasts dead grass and a pine board with green grass, and you can see there is significantly more red light reflected than blue light. This, and the fact that red light is attenuated less as it passes through the atmosphere, tends to make the red band a better choice for general-purpose vegetation health monitoring. I mention this to highlight that NDVI is calculated using NIR and red bands, not NIR and "visible".

There is a lot of variation in the way people calculate NDVI. Ideally you would use imagery captured using narrow-band red and NIR-pass filters, calibrated for surface reflectance. Most hobbyists acquiring imagery using small UAVs use very broad-band point-and-shoot cameras and no calibration. With those tools you can produce very useful one-off image products for monitoring plant stress and other things, but they are not well suited for comparison over time or with other NDVI products.

Using a vegetation index based on the difference between NIR and visible (NIR-VIS) will tend to work OK if the area being imaged is evenly illuminated (e.g., flat, no clouds or shadowed vegetation), but any vegetation that is shadowed will have lower index values than an area of an image that is unshadowed. Both the ratio and difference approaches to calculating VIs have issues. Neither will work predictably unless the imagery is calibrated in some way. The calibration can be absolute, using calibration cards or panels, or it can be relative, by setting a white balance or stretching a histogram to make it look good.
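
As a simple illustration of the absolute route, here is a sketch using hypothetical digital numbers and a single 50% reference panel (a real empirical-line calibration would use at least two panels and fit an offset as well):

```python
import numpy as np

def calibrate_band(dn, panel_dn, panel_reflectance):
    """Very simplified absolute calibration: scale raw digital numbers (DN) so that
    the pixels covering a reference panel of known reflectance come out at that
    reflectance. A one-point gain-only version, just to show the idea."""
    gain = panel_reflectance / np.mean(panel_dn)
    return dn * gain

# Hypothetical numbers: a red and a NIR band, each with pixels over a 50% grey panel.
red_dn = np.array([[1200., 1800.], [900., 2100.]])
nir_dn = np.array([[5200., 6100.], [4800., 3900.]])
red_refl = calibrate_band(red_dn, panel_dn=np.array([2400., 2500.]), panel_reflectance=0.50)
nir_refl = calibrate_band(nir_dn, panel_dn=np.array([7000., 7100.]), panel_reflectance=0.50)

ndvi = (nir_refl - red_refl) / (nir_refl + red_refl)
print(ndvi)
```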

There are several differences between satellite-derived and point-and-shoot NDVI. As the article points out, one is that the satellite signal has to deal with the full thickness of the atmosphere, but "proper" NDVI is calculated after removing/reducing those and other radiometric effects. More significant are the calibrated, narrow bands used in satellite sensors (compared with point-and-shoot cameras), and the fact that satellite sensors tend to have much coarser resolution than imagery flown from a small UAV. NDVI is very sensitive to the geometry of incident and reflected light relative to the feature, so when you image a blade of grass the NDVI values will vary simply because the blade is curved. This isn't an issue for many satellite sensors, since they're imaging a relatively large area, and at that scale coarser features like slope and aspect are more important than leaf position. Close-range imaging from a small UAV platform introduces complexities not present in satellite systems. Ultra-high resolution sounds like a great thing, since our brains process spatial data more effectively than spectral data, but computer algorithms are the opposite. As they say, the devil is in the details.

NDVI is still the most widely used vegetation index out there. In fact, the first book dedicated to NDVI was just recently published. Some of the more sophisticated products, like FAPAR, are in use by a few people, but they are much more difficult to calculate than NDVI, and this complexity is magnified with close-range imaging. If you have a specific parameter you want to map/model, then Thorsten's suggestion of using regression models is the way to go.

Hi James,


I wrote a script for Fiji or ImageJ that will do NDVI or NIR-VIS in batches that I can post here. It's a really good tool to compare vegetation indices over the different visible channels. One thing you will see when you start to compare is that field moisture really throws off NDVI, but doesn't really affect NIR-VIS.

Right now my script has no GUI and is not commented properly, but I will spend some time today and tomorrow to get it ready for a general user and will post it here.
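
In the meantime, the core of what the script does boils down to something like this (sketched here in Python rather than the ImageJ macro language, and assuming an NGB channel order of NIR, green, blue; this is an illustration of the idea, not the script itself):

```python
import os
import numpy as np
from PIL import Image  # Pillow

IN_DIR, OUT_DIR = "ngb_images", "index_maps"
os.makedirs(OUT_DIR, exist_ok=True)

for name in os.listdir(IN_DIR):
    if not name.lower().endswith((".jpg", ".jpeg", ".png", ".tif")):
        continue
    img = np.asarray(Image.open(os.path.join(IN_DIR, name)), dtype=np.float64)
    nir, vis = img[..., 0], img[..., 2]      # assumed channel order: NIR, green, blue

    indices = {
        "nirvis": nir - vis,                       # NIR-VIS
        "ndvi": (nir - vis) / (nir + vis + 1e-9),  # classical NDVI
    }

    base = os.path.splitext(name)[0]
    for label, idx in indices.items():
        # Stretch each result to 8 bits purely so the outputs are easy to look at.
        lo, hi = idx.min(), idx.max()
        scaled = np.zeros_like(idx) if hi == lo else (idx - lo) / (hi - lo) * 255
        Image.fromarray(scaled.astype(np.uint8)).save(
            os.path.join(OUT_DIR, f"{base}_{label}.png"))
```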

Best,
Daniel

PS Good choice on the Event38. You get a much better green channel with that filter relative to the Schott BG3.

UAVStuff,

I totally agree! I will continue to post our results here and on our blog, but it's great to see that others have come to the same conclusions in the replies below. While there is some research underway in this area, we as a community can certainly contribute a new standard. For the foreseeable future, for us it is NIR-VIS, and it would be great to continue the discussion as the growing season progresses.

Best,

Daniel

Hi Taylor,

Point taken about the etymology. I should have called NIR-VIS the DVI (Difference Vegetation Index), but since there is so much consensus about the value of that index, we should call it the DIYDI or something like that.

I totally agree about the false coloring. The LUT (look-up table) is totally arbitrary on all of these images. I prefer to look at them in B&W, which is familiar to me from having done so much image processing and analysis for my PhD research, but many people not familiar with the topic really like seeing color. However, the histograms of all of the images I posted are stretched to fill 16 bits, so there is more consistency in that regard, and the LUT does fairly accurately reflect what I said the colors mean.
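
(For anyone wondering, the stretch is nothing fancier than this, shown here as a Python sketch rather than the ImageJ step I actually use:)

```python
import numpy as np

def stretch_to_16bit(index_map):
    """Linearly rescale an index image so its minimum maps to 0 and its maximum to 65535.
    This is only a relative stretch for display consistency; it does not calibrate the data."""
    idx = index_map.astype(np.float64)
    lo, hi = idx.min(), idx.max()
    if hi == lo:
        return np.zeros(idx.shape, dtype=np.uint16)
    return ((idx - lo) / (hi - lo) * 65535).astype(np.uint16)
```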

With regards to what UAVStuff wrote above, we should try to settle on a standard for image processing and presentation as a community. I will post my ImageJ scripts this evening or tomorrow once I've made them a little more user friendly and they could serve as a good start.

Best,

Daniel

Hi Thorsten,

I totally agree and thanks for all your examples. Is the brown band a road, the whitish band an irrigation ditch, and the bottom and top part corn?

We have seen instances where the NDVI images really highlight a certain type of weed or a soil moisture problem that the NIR-VIS images miss, but in general the NIR-VIS images seem to give the most consistent, reasonable results. As a community, it would be great to compile a library of weeds and crop conditions that can be detected more easily using various indices. We have a few we could contribute now. What do you think is the best way to store and verify this information?

Best,

Daniel

Hi Ned,

Thanks for the clarifications.

I don't believe a stressed leaf will reflect more red than green (after all, a dying plant is still green), but a dead plant will. As you pointed out, the red reflectivity arrow coming off the dead plant should have been slightly longer than the green. I will fix this and upload a corrected image.

I have actually found the opposite regarding shadows and NDVI. Typically the low denominator from a shaded area overcomes the low numerator. Below is a typical example. You'll notice the shadow in the lower right-hand corner is suppressed in the NIR-VIS image (middle) but highlighted in the NDVI image (bottom). This is very reproducible and is one of the reasons I believe NDVI introduces significant artifacts into these images relative to NIR-VIS.
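
To put some made-up numbers on that mechanism (these are hypothetical reflectances, not values pulled from the image):

```python
# Hypothetical reflectances chosen only to illustrate the effect described above.
sunlit_crop   = {"nir": 0.50, "vis": 0.10}
shaded_ground = {"nir": 0.06, "vis": 0.02}   # both bands small, but not zero

for label, p in (("sunlit crop", sunlit_crop), ("shaded ground", shaded_ground)):
    dvi  = p["nir"] - p["vis"]
    ndvi = (p["nir"] - p["vis"]) / (p["nir"] + p["vis"])
    print(f"{label:13s}  NIR-VIS = {dvi:.2f}   NDVI = {ndvi:.2f}")

# sunlit crop    NIR-VIS = 0.40   NDVI = 0.67
# shaded ground  NIR-VIS = 0.04   NDVI = 0.50
```

The shaded pixel nearly vanishes in NIR-VIS but still scores an NDVI of 0.50, not far from real vegetation, which is how the shadow ends up highlighted in the NDVI map.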

For our nascent industry, I believe a simple, reproducible image will go a long way toward gaining converts. As we mature, adding calibration and the ability to compare across fields over time under different illumination conditions will add significant value. I look forward to continuing to work in this area and seeing what our community can do!

Best,
Daniel
