Misconceptions about UAV-collected NDVI imagery and the Agribotix experience in ground truthing these images for agriculture

Thanks to all of you for your great feedback on my last blog post on airframe selection. There has been a lot of interest and misinformation about NDVI images on here recently, so I thought the topic deserved its own post. Briefly, I explained a little of the history, why NDVI may not be the best index for UAVs, and showed some ground-truthing we did with NDVI maps. It ended up being a little too long to summarize here, but check it out at http://agribotix.com/blog/2014/6/10/misconceptions-about-uav-collec...

I would encourage anyone interested in the topic to check out some of Compton Tucker's original papers on the subject, as they are really illuminating as to how the index came about. There is a lot of home-grown research on Public Labs dedicated to making NDVI work with different systems, but NDVI was never a gold standard, and different equipment may require different image-processing metrics.


Replies to This Discussion

This is a great post, thank you! Would love to see your script as well.


Thanks! You made my day.


Daniel, you have stimulated some good discussion. Just for clarity, shadows will impact both NDVI and the subtraction methods. In some of my calibration tests I've been getting reasonable NDVI values in light shadow. In some cases it makes sense to mask shadow pixels before processing. I think additional tests with different approaches will be useful.

I created an NDVI image using your false-color image, and this is what I got. The result isn't all that bad and is arguably as good as the subtraction image. I post this not to argue that one approach is better than another but to illustrate that a lot has to do with the way the image is processed. I calculated NDVI without stretching your bands and then did a linear stretch on the NDVI image so pixel values were within a sensible NDVI range. Unless we are working with some sort of calibration or other standardization, this will be a very subjective process.
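In case it helps anyone reproduce this, here is roughly the same computation sketched in Python with NumPy. This is just a sketch of the idea, and it assumes the usual false-color layout where the red channel holds NIR and the blue channel holds the visible band; check your own camera's band assignment before trusting the numbers.

```python
import numpy as np

def ndvi_with_stretch(image):
    """Compute floating-point NDVI from a false-color array (H x W x 3)
    and linearly stretch the result to 0-255 for display.

    Assumes NIR was recorded in the red channel (index 0) and the
    visible band in the blue channel (index 2) -- an assumption about
    the camera, not a universal convention.
    """
    nir = image[:, :, 0].astype(np.float64)
    vis = image[:, :, 2].astype(np.float64)
    # Floating-point NDVI; guard against division by zero in dark pixels
    ndvi = (nir - vis) / np.maximum(nir + vis, 1e-9)
    # Linear stretch of the NDVI image into the 0-255 display range
    lo, hi = ndvi.min(), ndvi.max()
    stretched = (ndvi - lo) / (hi - lo) * 255.0
    return ndvi, stretched.astype(np.uint8)
```

The min/max stretch here is exactly the subjective part: two images of the same field can end up with different color scales unless you calibrate or standardize the range.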

I hope this discussion continues. 


Excellent. Looking forward to those scripts!

Daniel - I have some plugins in a Github repository (https://github.com/nedhorning/PhotoMonitoringPlugin) and you're welcome to modify or add to that collection. If you're interested in using the same sort of single or multi-image processing chain let me know offline and I'm happy to work with you to get the NIR-VIS option up and running. 

Hi Ned,

Thanks for sharing. I am in the midst of making my simple script easier to use and will post it as soon as I'm done. It looks like yours is significantly more powerful. Did you ever have issues matching two cameras? We originally thought two cameras would be important to get dedicated red and NIR channels, but the French guys who post here every now and then said they didn't get any better results, and the image-matching process introduced some errors into their processing. Did you have this experience?

Sorry for the stupid question, but I'm having trouble getting your plugin to open in Fiji, and I am getting an error trying to open the .pdf instructions. I've never used GitHub before. Is there a special way to download the files? The specific errors are that the .pdf file is damaged and that Fiji can't unzip two files when I try to install the plugin. I'm sure there's just some way to download properly from GitHub, but I don't know how.

Finally, how did you make the NDVI image above? I tried reproducing it both stretching and not stretching the images, using and not using floating-point arithmetic, and using both the green and the blue channel as the visible band, but every combination leads to a bright shadow in the corner. Would you mind posting specific instructions for how to make that image?




I have had good luck with dual-camera systems. The image matching is quite fast and surprisingly good, and they have the advantage of clean red and clean NIR bands, which is helpful. The drawback is the matching step: I've had very good luck matching image pairs, but the main issue is parallax. Those effects can be reduced by flying higher and keeping the camera lenses as close together as practical. Another drawback is the extra weight and having to synchronize the shutters so they fire at the same time; otherwise parallax effects increase further.
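To put rough numbers on the parallax point: for two nadir-pointing cameras, the apparent offset of a ground point between the two images is about baseline × focal length / altitude, so doubling your altitude halves the offset. A quick back-of-the-envelope in Python (the 10 cm baseline and 3000 px focal length are just hypothetical example values, not from any particular rig):

```python
def parallax_shift_px(baseline_m, altitude_m, focal_px):
    """Approximate image-space offset (in pixels) of a ground point seen
    by two nadir-pointing cameras separated by baseline_m, flying at
    altitude_m, with a focal length of focal_px pixels."""
    return baseline_m * focal_px / altitude_m

# Example: lenses 10 cm apart, focal length ~3000 px
print(parallax_shift_px(0.10, 50, 3000))   # 6.0 px at 50 m
print(parallax_shift_px(0.10, 100, 3000))  # 3.0 px at 100 m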

Thanks for letting me know about the GitHub problem. I wonder how long it's been like that. It's possible that the problem is on the GitHub side, but it's more probable that I'm doing something stupid. I don't have time to look into it now, but I'll see if I can get it to work tomorrow. You can also access an older version of the plugin at Brenden's FlightRiot site: http://flightriot.com/post-processing-software/fiji-imagej/. Click on "Click Here to Download FIJI/IMAGEJ with PHOTO MONITORING PLUGIN pre-configured".

I created the NDVI image using my plugin but get the same result using these steps in Fiji:

Image => Color => Split Channels

Process => Image Expression Parser, using the expression (A-B)/(A+B), where I specify A to be the red band and B to be the blue band

I don't remember the details of the next step, but I think I scaled values from -0.1 to 0.25 into a range of 0 to 255, clipping resulting values less than 0 to 0 and values greater than 255 to 255, and then converted to an integer image. I did that last step in R since I'm not sure how to do it easily in Fiji. Without scaling the NDVI image you can still see the same patterns, but the color tables look awful and the NDVI values are quite low, since the blue band is a bit brighter than it should be relative to the NIR (red).
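For anyone who would rather stay in one language than hop over to R, that last rescaling step might look something like this in Python. The -0.1 to 0.25 input range is the one I used for this particular image, not a general recommendation:

```python
import numpy as np

def scale_ndvi_to_byte(ndvi, lo=-0.1, hi=0.25):
    """Map NDVI values in [lo, hi] linearly onto [0, 255], clipping
    anything outside that range, and return an 8-bit integer image.
    The default lo/hi values are specific to one image."""
    scaled = (ndvi - lo) / (hi - lo) * 255.0
    return np.clip(scaled, 0, 255).astype(np.uint8)
```

Note that the clipping is exactly what throws away the extreme bright and dark areas, which is relevant to the shadow discussion above.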

I should preface posting this script with the disclosure that I am certainly not a talented programmer the way Ned Horning is, and that Fiji and ImageJ are fairly clunky programs. Once we settle on a robust processing routine, we will port all of this over to Python to cut ImageJ out.

Also, adding a better interface really slowed down these scripts, so if you'd like them to run faster, I suggest just cutting out the appropriate piece and running it on its own. If anyone has other suggestions for speeding up this script, I would love to hear them as well.

To use the script, simply open the .ijm file in ImageJ or Fiji and click "Run". I believe the dialogue boxes are fairly clear, but the script will first prompt you to choose NIR-VIS or NDVI, then ask which lookup table you'd like to use (on a Mac you have to copy the LUT from the Fiji package to somewhere you can select it), then ask for the directory where your images are located, and finally ask whether you'd like the blue or the green channel to represent the visible.

This script should be a really easy way to compare NIR-VIS with NDVI and to compare the effect of using the blue channel versus the green channel. Please let me know if you have any questions or suggestions for improvement.
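For anyone who wants to see the core logic without installing Fiji, the per-image step of the macro can be sketched in Python. This is a rough sketch of the same idea, not a port of the .ijm file, and the function and parameter names are mine:

```python
import numpy as np

def vegetation_index(image, method="NDVI", vis_channel="blue"):
    """Per-image core of the macro: take a false-color array (H x W x 3)
    whose red channel holds NIR, pick the blue or green channel as the
    visible band, and return either the NIR-VIS difference or NDVI as a
    floating-point array."""
    nir = image[:, :, 0].astype(np.float64)
    vis_idx = {"green": 1, "blue": 2}[vis_channel]
    vis = image[:, :, vis_idx].astype(np.float64)
    if method == "NIR-VIS":
        return nir - vis
    # Default to NDVI, guarding against division by zero in dark pixels
    return (nir - vis) / np.maximum(nir + vis, 1e-9)
```

Looping this over a directory of images (with glob, say) and applying a lookup table for display would reproduce the rest of what the macro does.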


Giving this a shot now with some images. If I can bother you some more ... do you have a preferred LUT?  

Interesting discussion and exercise, thanks! I tried to avoid the shadow, and this was my best option.

Hi Ned,

Great to know about the dual camera systems. We may have to experiment with that over the winter when life slows down a little bit. Did you get appreciably better results? It seems like there is little difference in either the NIR-VIS or the NDVI signal when switching between using the green and blue channel to stand in for the visible.

I'm pretty positive the reason your image lacks the high NDVI signal from the shadow is because of your clipping step. I would have to see exactly what image arithmetic you used to totally reproduce your result, but it's not surprising that clipping values less than 0 and values greater than 255 on some scale would eliminate very bright or dim areas. You can see your image also lacks the bright areas on the left side. I'm not sure throwing out these data is the right decision though. Is there any reason you decided to clip these values? Here it delivers a better image, but I don't think that's necessarily the case.


Hi James,

I've found the attached LUT really helps to clarify density of vegetation with green to red versus non-vegetation with gray using my script. Let me know what you think and if everything works.






© 2019   Created by Chris Anderson.
