NDVI Result

I recently modified an A2300 by removing the IR filter and finding a gel filter that blocks red light and passes IR.  I took two pictures: one with the Rosco 2007 gel filter, and one without.  NEITHER image has the built-in glass IR filter.

I used this tutorial to do the image manipulation.  I hear there are processing tools out there; I'm going to give them a try, and perhaps create a macro for Photoshop to make it quicker.

Visible base

NIR base - with gel filter

I then processed the image in Photoshop using two methods: first the Normalized Difference Vegetation Index (NDVI), in the image at the top, and then NRG (below), where Near-Infrared, Red, and Green are used to compose a picture instead of the usual Red, Green, and Blue.  (Thanks for the great site, publiclab!  http://publiclab.org/wiki/ndvi)
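For anyone who wants to script this instead of doing it by hand in Photoshop, both composites reduce to simple per-pixel arithmetic. The sketch below is a minimal Python/NumPy version, assuming you already have the NIR and red data from the two registered photos as arrays; file loading, image registration, and white-balance compensation are deliberately left out.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red); result is in [-1, 1]."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    denom[denom == 0] = 1e-9  # avoid division by zero on black pixels
    return (nir - red) / denom

def nrg_composite(nir, red, green):
    """Stack NIR, Red, Green into the display R, G, B channels."""
    return np.dstack([nir, red, green])
```

Healthy vegetation reflects strongly in NIR and absorbs red, so it lands near the high end of the NDVI range, which is why the vegetated areas stand out in the image at the top.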

NRG result

I'm obviously itching to do this with aerial images, but I wanted to throw this out there and see where I'm falling short.  I think the NDVI image begins to show me valuable information (where less IR is reflected), but any tips on improving results are greatly appreciated.  Comments on the importance of custom white balance are also appreciated.

If you have any questions, please don't hesitate to ask!  Thanks!


Comment by Joshua Ott on May 28, 2013 at 12:06pm

BTW, the slight parallax gives a low budget 3D, just cross your eyes.

Comment by Sam Kelly on May 28, 2013 at 12:56pm

Take me down to the parallax city

Comment by German_ti on May 28, 2013 at 2:59pm

How do you get NDVI without a red channel? You will get NIR/G/B!

But as you know,

Comment by Gerard Toonstra on May 28, 2013 at 3:33pm

MaxMax states they're using the blue channel instead. Since most plants look green, the amount of photosynthesis at that color is lowest, as you can see in their graph. However, at the same time, NDVI is a relative measure?


Playing around with the levels in Photoshop has the same effect as playing around with sensor resolution and bias. So it follows that if green is absorbed less, you'd get a relatively inaccurate image.

Comment by Joshua Johnson on May 28, 2013 at 4:55pm

It's amazing to see 3DR getting more involved in agricultural applications.  Agriculture is going to be one of the largest civilian drone markets out there! Bravo 3DR! 

Comment by Patrick Coyle on May 28, 2013 at 5:27pm

For a great tool to post-process NIR and visible images, see:


Ned Horning's tool for Fiji or ImageJ, which processes directories of synched pairs. An example from a Fort Mason (San Francisco) community garden KAP session is shown below. For more, see:


Comment by Gerard Toonstra on May 28, 2013 at 5:56pm

Well, for standard visible-light photography, Cambridge in Colour has some very good explanations:


White balance compensates for hues emitted by light sources that are not true white. What I see in the Rosco filter work is an attempt to make the photos look equal to the photos from the LifePixel camera. What I miss in that discussion is the consideration of whether the LifePixel SuperBlue result is really the result you'd want for producing NDVI images: what makes them the ultimate reference?

Is it correct to make assumptions about what these NIR/GB images should look like? What is important in those images though is the ratio between reflected light and NIR. So I wouldn't do anything much to upset that relationship by trying to make it look like a "good looking photo", which it is not. Doing that would probably impact the resolution of measurements or otherwise bias the null?

There are some specific differences between the filter used by LifePixel and the Rosco. This could explain why the white balance needs to be different to account for those filter differences (so that the R channel can actually be compared correctly against the B and G channels). The recorded colors follow curves, not linear responses, which means that the response to the frequencies in the spectrum covered by those channels needs to be analyzed further.
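For JPEG output, the non-linearity mentioned here is approximately the sRGB transfer curve, and a common first step before comparing channels numerically is to undo it. The function below is the standard sRGB inverse, offered as a sketch under the assumption that the camera's JPEG encoding is close to sRGB (RAW files avoid the issue entirely).

```python
import numpy as np

def srgb_to_linear(c):
    """Invert the sRGB transfer function; input values in [0, 1]."""
    c = np.asarray(c, dtype=np.float64)
    # Linear segment near black, power-law segment elsewhere
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
```

Channel ratios such as NDVI only make physical sense on linearized values, which is one concrete reason RAW workflows give more trustworthy numbers.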

One strong "indicator" here is that the filter's response to green is very small. When you shoot anything, it will come up as purple, since green is basically absent. In a lot of these pictures I notice that some of the growth is yellow, which is an indication that green is mixed into the image by shifting the white balance away from this purple?

Anyway, I'm new to this stuff too, but I'd love to see this explained mathematically somewhere. At the moment I get the impression people are trying to make visually pleasing images. I think of these images as data, nothing else.

Comment by Ned Horning on May 29, 2013 at 6:42am

Gerard - You raise a lot of interesting points that highlight the blurring of art and science.  In the next week or so I'll try to post a research note on http://publiclab.org that digs deeper into some of the points you raise. In the meantime here are some quick thoughts.

I don't think the LifePixel SuperBlue filter is necessarily the optimal filter for creating NDVI using a single camera, but it works, it's something that I and others know about, and the Rosco 2007 filter has similar properties.

The color of the pictures using a SuperBlue or Rosco 2007 filter is largely due to the white balance settings. The yellowish vegetation results when the white balance is set using a gray card. Both the red and green channels of the camera sensor record mostly near-IR light and combining red and green bands will give a yellowish appearance since plants reflect a lot of near-infrared light.

There is no “correct” way to display false-color near-infrared images. The de facto standard is to simulate the colors from film near-infrared photos, but there are lots of other options. If the intent is to make a pretty picture then you select the band order (NIR, R, B or whatever you want) and make enhancements to make it more beautiful. There is some stunning infrared imagery out there.

From a more scientific view you can approach it from a qualitative or quantitative perspective. From a qualitative view the intent is usually to improve the interpretability of the image, which can in some cases result in an ugly-looking but easy-to-interpret image. For agriculture I think this will be the sweet spot for most people. Farmers will be interested in relative differences within a field to see which areas need more attention as far as fertilizer or pest control.

For quantitative work the normalized difference vegetation index (NDVI) seems to be the primary workhorse. The peer-reviewed literature is abundant with articles correlating NDVI to all sorts of plant growth processes, and there is a whole industry being developed to automate how this information can be used in agriculture. This is not an easy task, and I think there will be disappointment from people expecting to have an inexpensive, easy-to-use solution to treating crops in the near future. Some of these simple camera systems (mostly DSLR cameras) have been calibrated and are used to record radiance at known wavelengths.

It's important to know that NDVI doesn't have a single “standard”. You can try to create an NDVI product that is similar to another NDVI product like some of the satellite NDVI products from Landsat and MODIS but that's about it.

So, my take is that people are making visually pleasing images and some are using these for scientific studies and some are doing both. I expect the greatest benefit in the next 10 years or so will come from visually interpreting these images but automated methods are slowly getting better. I also think that the value of normal color imagery is underestimated.

Enough for now...


Comment by Gerard Toonstra on May 29, 2013 at 7:51am

Ned - apologies if anything I said sounded too critical. I had some time to think more about the subject yesterday and indeed, I don't think there'll ever be a purely scientific way to approach this subject unless you start measuring the conditions or get into specialized equipment. It's a bit of a black art. It's likely that interpretation in the end is best done by looking at how 'correct' the image looks visually.

The interpretative result is good enough for one set of conditions to provide a sense of what's going on. I think, however, that the more the results can be tied to some known reference (even if it's the camera internals or some other inexact reference), the more it would provide a basis for comparisons between sets (history?), regardless of the exact conditions under which the shooting took place. That not only has benefits for comparison, but also helps deliver predictable results without too much tweaking in post-processing.

Unfortunately I don't have a NIR camera (yet); I'd very much like to experiment with this myself and see where it goes. There are two workflow experiments I'd like to run for this super-blue filter:

1. Produce an image with white balance at 5000K and 0 tint. The B channel now contains an uncompensated amount of reflected visible light, produced by reflections off chlorophyll-a. R now contains recorded infrared. I'm assuming that NIR (R) can be used as-is without compensation, but B needs to be compensated separately for the white balance at the time of shooting. It should be easier to do this if the source is a RAW image. Eventually these are the two channels used as the basis for the NDVI calculation.

2. An alternative is to shoot two separate images from a tripod: one at 5000K, 0 tint, to get a NIR channel, and another with the correct white balance for the conditions to provide a 'correct' blue channel. I'd very much like to see what the result is if these are used together. In the link here: http://publiclab.org/notes/cfastie/04-20-2013/superblue, the Rosco filter shifted the colors too far, upsetting the absolute levels and therefore the balance between recorded NIR and visible light in relation to one another.
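Experiment 1 above can be sketched in a few lines of Python/NumPy, assuming the super-blue frame has been developed at a fixed 5000K white balance and loaded as an RGB array. The `blue_gain` factor stands in for the separate compensation of the blue channel described in step 1; its default here is an illustrative assumption, not a measured calibration.

```python
import numpy as np

def ndvi_superblue(rgb, blue_gain=1.0):
    """NDVI from a single super-blue frame: R channel = NIR, B channel = visible."""
    nir = rgb[..., 0].astype(np.float64)               # recorded near-infrared
    vis = rgb[..., 2].astype(np.float64) * blue_gain   # white-balance-compensated blue
    denom = nir + vis
    denom[denom == 0] = 1e-9  # avoid division by zero on black pixels
    return (nir - vis) / denom
```

Because the whole calculation hinges on the ratio of the two channels, `blue_gain` is exactly the kind of parameter that should come from a known reference rather than from making the picture look pleasing.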

Anyway, enough hypothesizing; for now I'm going to see if I can do anything interesting with the photos in this article.

Comment by Ned Horning on May 29, 2013 at 8:28am

Gerard - No worries about sounding too critical. This critical thought is important. I'd also like to better characterize these simple NIR cameras. There are protocols to do that, but I need the time and a connection to a suitably equipped lab. Your intent to use RAW-formatted images is good. There is a general aversion to using RAW images since it adds complexity to processing/viewing, recording times are slower, and they take a lot more space, but it's the only way to get reliable numbers. 

You (and anyone else reading this) should consider joining the discussions at http://publiclab.org through research notes and email lists. If you do some testing the folks there would be thrilled to hear about your work.

