Misconceptions about UAV-collected NDVI imagery and the Agribotix experience in ground truthing these images for agriculture

Thanks to all of you for your great feedback on my last blog post on airframe selection. There has been a lot of interest and misinformation about NDVI images on here recently, so I thought the topic deserved its own post. Briefly, I explain a little bit of history, why NDVI may not be the best index for UAVs, and show some ground-truthing we did with NDVI maps. It ended up being a little too long to summarize here, but check it out at http://agribotix.com/blog/2014/6/10/misconceptions-about-uav-collec...

I would encourage anyone interested in the topic to check out some of Compton Tucker's original papers on the subject, as they are really illuminating as to how the index came about. There is a lot of community research on Public Lab dedicated to making NDVI work with different systems, but NDVI was never a gold standard, and different equipment may require different image-processing metrics.


Replies to This Discussion

Thanks Daniel, that all makes sense.

DroneCon sounds great.  Unfortunately, I'm currently 14,212 km from Boulder, Colorado.

I love the hot stove idea.  Great application of physics in household appliances!

Wow - 80% is huge. I knew there was some contamination of NIR in the blue channel, but assumed it was small. It seems, then, that using the BG3 and calculating NIR - VIS with the blue channel as VIS, you're actually getting something like:

NIR - (Blue+0.8NIR)

= 0.2NIR - Blue

i.e. not good.

Hence your idea to use a different filter, and green for VIS, where less NIR is present.

(Let me know if my reasoning is wrong here.)
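That arithmetic is easy to check in a few lines of Python. This is only a sketch: the 0.8 leak fraction is the figure quoted in this thread, not a calibrated constant, and the pixel values are invented.

```python
# Sketch of the contamination arithmetic above. The 0.8 NIR-leak fraction
# comes from this discussion, not from a sensor calibration.

def measured_blue(true_blue, true_nir, nir_leak=0.8):
    """Blue-channel reading when a large fraction of NIR leaks into it."""
    return true_blue + nir_leak * true_nir

def naive_dvi(nir_reading, vis_reading):
    """NIR - VIS, taking the contaminated blue channel as VIS."""
    return nir_reading - vis_reading

# Example pixel: healthy vegetation with strong NIR, weak blue.
true_nir, true_blue = 0.50, 0.05
blue_reading = measured_blue(true_blue, true_nir)  # 0.05 + 0.8 * 0.50 = 0.45
print(naive_dvi(true_nir, blue_reading))           # 0.50 - 0.45 = 0.05
# Same as 0.2*NIR - Blue = 0.2*0.50 - 0.05 = 0.05: most of the vegetation
# signal cancels out, which is why this combination is "not good".
```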

The only thing that I didn't understand was when you said:

"...if you play with my script, you'll see there is really no difference in between using the blue or the green as the visible channel, so it probably doesn't matter which one you choose."

Based on what we've talked about, surely there should be a difference, and green should be better?

I'm also curious about what you and others think about the various options for creating Vegetation Index imagery. There's ImageJ of course, but is anyone using Pix4D or AgPixel? My trial for AgPixel expired before I received my BG3 filter so I was unable to test it with any of my own images. Pix4D's latest version appears to have good options for Vegetation Indices, but it's not in everyone's budget :-)




P.S. I just thought of this:

Since the Blue channel picks up a lot of NIR, you could install a Blue-cut filter, and use the Blue channel for NIR, and Green for VIS.

I expect it would give similar results to the method we were talking about in the previous posts: namely a Red-cut filter, allowing you to use the Red channel for NIR, and Green for VIS.

I wonder if this is being done, and what would be the pros and cons of these two approaches?
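For what it's worth, the two setups can be compared with a toy channel model. The sensitivity fractions below are invented placeholders, not measured values; real numbers depend on the camera's Bayer-filter response curves.

```python
# Illustrative comparison of the two single-camera setups above. The channel
# sensitivity fractions are invented placeholders; real values depend on the
# camera's Bayer-filter response curves.

def channel_reading(scene_nir, scene_vis, nir_sens, vis_sens):
    """Model a channel as a weighted sum of scene NIR and visible light."""
    return nir_sens * scene_nir + vis_sens * scene_vis

def dvi(nir_reading, vis_reading):
    """Difference Vegetation Index: NIR - VIS."""
    return nir_reading - vis_reading

scene_nir, scene_vis = 0.60, 0.10  # healthy vegetation: strong NIR, weak visible

# Approach A: blue-cut filter. The blue channel (high native NIR response)
# serves as the NIR band; the green channel serves as VIS.
a_nir = channel_reading(scene_nir, 0.0, nir_sens=0.8, vis_sens=0.0)
a_vis = channel_reading(scene_nir, scene_vis, nir_sens=0.2, vis_sens=0.9)

# Approach B: red-cut filter. The red channel serves as the NIR band;
# green again serves as VIS.
b_nir = channel_reading(scene_nir, 0.0, nir_sens=0.9, vis_sens=0.0)
b_vis = channel_reading(scene_nir, scene_vis, nir_sens=0.2, vis_sens=0.9)

print(dvi(a_nir, a_vis), dvi(b_nir, b_vis))  # both clearly positive over vegetation
```

Under these made-up numbers the two approaches give similar DVI values, so the practical differences would come down to each channel's real NIR sensitivity and leakage.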

I used ImageJ

The same photo with AgPixel, nice program but too expensive


Hi Richard,

Your reasoning is spot on. So much NIR light leaks through the blue channel that, in theory, using the green should deliver better results. However, I just don't see that in practice. Generating a DVI image using the blue yields an almost identical result to using the green. I believe it's because there is a much greater difference between the NIR and either the blue or the green than between the blue and the green.
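A quick numeric check of that point, using invented reflectance-like values where NIR dwarfs both visible bands:

```python
# Invented values: NIR much larger than either visible band, and blue
# roughly equal to green, as is typical for vegetation.
nir, blue, green = 0.55, 0.08, 0.10

dvi_with_blue = nir - blue    # 0.47
dvi_with_green = nir - green  # 0.45
# The NIR-minus-visible difference dominates either version, so the two
# DVI maps come out nearly identical.
```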

We use Fiji for the calculations. I don't think it makes much sense to buy Pix4D unless you also need it for the stitching. I've never used AgPixel.



Hi Richard,

This is the other common way to collect NIR images, and there are a few posts on Public Lab about using this technique. You end up with NIR, green, and NIR+red channels. Before I figured out that DVI images were ground-truthing very well (I will post a bunch more examples in the next week or so), I was a little frustrated with NDVI and was planning on trying this out. Imaging is going very well now, so I'm sticking with our current setup, but I want to experiment with it this winter when things slow down to compare the two. I suspect they are equivalent, but it would be nice to know for sure.


Hi all - In case anyone is still following this, I wanted to let you know that I updated the Photo Monitoring plugin and guide for ImageJ/Fiji to add DVI and NDVI in a drop-down menu. I posted a research note on the Public Lab site about that with a couple of examples of DVI and NDVI comparisons. My hope is that this will make DVI and NDVI comparisons easy to do.

Excellent work, thank you Ned for these great tools.

Very interesting discussion, am I too late to participate?  My background is in remote sensing - lots of hyperspectral data processing, so I admit my bias is for lots of narrow bands. :-)

I think I can help address some issues here. You mention that you don't see a lot of difference in using either the blue or green channel to make a DVI. I think the main thing that contributes to that is the fact that the bands are so broad in a camera or GoPro - the blue band also detects a lot of green light and vice versa. Look at the spectral plots of various cameras - MaxMax has several plots. The bands overlap severely. I'd wager you would see more difference in DVIs if your blue and green channels were much narrower - say 20 nm or so - like they are on a multispectral sensor. That's why I prefer narrow bands - they allow you to focus on specific spectral features.

There has also been some discussion about getting anomalous NDVI or DVI values in shadows. I agree with Ned in that there is no reason why either NDVI or DVI should give you more anomalies. John Stuart in his post on 6/12/14 showed a set of images where the NDVI showed some vegetation signal in a few shadows. If you look at his NIR image you do see some red in this area, so I would suspect there is indeed veg there - possibly weeds sucking up water between the crop rows.

OK, so why are there anomalous NDVI or DVI values in some shadows? It's because healthy plants with a good leaf structure reflect NIR light so efficiently - more so than any other wavelength. It's the leaf structure that reflects the NIR; this helps the plant stay cool. So where does that scattered NIR light go? It goes everywhere! Including the shadows. If we follow the path of some NIR photons, they would scatter from a nice healthy plant into the shadows, where some are absorbed but others reflect back up to the sensor. The effect is that the scattered light contributes to the apparent NIR reflectance of the shadows. And since there isn't much signal coming from shadows anyway, it doesn't take much to get an increased NIR signal from them, and that would account for your anomalously high NIR-vis signal in dark areas.
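A toy example with invented values shows how little scattered NIR it takes to make a dark pixel look vegetated:

```python
# Toy numbers only: a dark, spectrally flat shadow pixel plus a small dose
# of NIR scattered in from nearby canopy.

def ndvi(nir, vis):
    return (nir - vis) / (nir + vis)

def dvi(nir, vis):
    return nir - vis

shadow_nir, shadow_vis = 0.02, 0.02   # shadow alone: NDVI = 0, DVI = 0
scattered_nir = 0.03                  # NIR photons bounced in from plants

print(ndvi(shadow_nir + scattered_nir, shadow_vis))  # ~0.43, vegetation-like
print(dvi(shadow_nir + scattered_nir, shadow_vis))   # 0.03, a small absolute bump
# Because the total signal in a shadow is tiny, a small NIR addition moves
# the indices a long way, especially the ratio-based NDVI.
```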

I learned all this in class but was really struck by this when I was working with some low altitude hyperspectral data where there were some bushes and grass next to a road. I converted the data to reflectance and then extracted a reflectance spectrum from the center of the road. I was amazed by how much that road reflectance spectrum looked like vegetation reflectance! There was a strong NIR signal. I was 4 or 5 pixels away from any veg but the scattered NIR light from the veg contributed strongly to the signal of the dark road. Same would happen with shadows as there is not much signal coming from them either.  

For accurate work you should convert your images to reflectance so you don't have variations due to sensor sensitivity differences, scattered light, the solar irradiance curve, etc. That's not to say you can't get a good DVI product from a camera or GoPro. You can; it's just that you can't really compare one collection to another unless you go to reflectance and normalize the vegetation index to account for illumination variations. That's why the Normalized Difference Vegetation Index (NDVI) is so widely used.
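The normalization point is easy to see with a sketch (invented values): a uniform change in illumination scales both bands, so it cancels out of the NDVI ratio but not out of the DVI difference.

```python
def ndvi(nir, vis):
    return (nir - vis) / (nir + vis)

def dvi(nir, vis):
    return nir - vis

nir, vis = 0.50, 0.10                    # invented vegetation band values
for illum in (1.0, 0.5):                 # e.g. full sun vs. light cloud
    print(dvi(illum * nir, illum * vis),   # 0.40, then 0.20: DVI halves
          ndvi(illum * nir, illum * vis))  # ~0.667 both times: NDVI is stable
```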




© 2020   Created by Chris Anderson.