Photosynthesis Assessments


NDVI Result

I recently modified an A2300 by removing the internal IR-cut filter and adding a Rosco 2007 gel filter, which blocks red light and lets IR pass.  I took two pictures, one with the gel filter and one without.  NEITHER image has the built-in glass IR filter.

I used this tutorial to do the image manipulation.  I hear there are processing tools out there; I'm going to give them a try, and perhaps create a Photoshop macro to make it quicker.

Visible base


NIR base - with gel filter


I then processed the image in Photoshop using two methods.  First the Normalized Difference Vegetation Index (NDVI), shown in the image at the top, and then NRG (below), where Near-Infrared, Red, and Green are used to compose a picture instead of the usual Red, Green, and Blue.  (Thanks for the great site, Public Lab!  http://publiclab.org/wiki/ndvi)
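For anyone who wants to skip the Photoshop channel math, the two-image NDVI step above can be sketched in a few lines. This is a minimal illustration, not the exact Photoshop workflow: it assumes you have two already-aligned arrays, the red channel from the visible shot and the (mostly NIR) red channel from the gel-filtered shot.

```python
# Two-image NDVI sketch: NDVI = (NIR - Red) / (NIR + Red), per pixel.
# Channel choices are assumptions: red from the visible photo, and the
# red channel of the IR-pass photo standing in for the NIR band.
import numpy as np

def ndvi(nir, red):
    """Return per-pixel NDVI as a float array in roughly [-1, 1]."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / np.maximum(nir + red, 1e-6)  # guard divide-by-zero
```

In practice you would load the channels from the two photos (e.g. with Pillow: `np.asarray(Image.open("visible.jpg"))[..., 0]`) after registering them, since any misalignment between the two shots shows up directly as NDVI noise.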

NRG result


I'm obviously itching to do this with aerial images, but I wanted to throw this out there and see where I'm falling short.  I think the NDVI image begins to show me valuable information (where less IR is reflected), but any tips on improving the results are greatly appreciated, as are comments on the importance of custom white balance.

If you have any questions, please don't hesitate to ask!  Thanks!


Comments

  • This topic is fascinating, and it's great to see so much discussion about it.  Brenden over at Flight Riot has been doing a lot of testing on this subject himself.  I actually found out about the Public Lab project from his site.  There seem to be a lot of factors that go into getting usable results. 

    As mentioned numerous times, the white balance seems to play a big part.  Geo discusses this in a few articles.  Another thing he's come across is the difference in how CMOS and CCD sensors produce NDVI results.  There seems to be a big difference between the two.  Check out the links to his various articles, there are lots of pictures that really highlight the differences between the variables.

    Open Source Single Camera NDVI – Vegetation Health Mapping

    NDVI Tests: Canon SX230 and A490 with Rosco #2007 Filter

    3 Camera Comparison with Rosco #2007 Filter

    Filter Tests for Single Camera NDVI

    NDVI Testing Continued – sx230 CMOS Sensor and Lighting

    NDVI – White Balance Observations with CMOS Sensors (Canon SX230)

    NDVI Tests – 3 Cameras, 10 Filters

  • I also realized that a grey card isn't going to cut it for this filter. The G band is basically suppressed, so it'll come out wrong in the end. The "green foliage" trick probably works well, because you're getting lots of sun reflection with lots of IR reflection. The amount of IR in sunlight versus visible light is probably not the same.

    I think a "green card" of a specific green hue, made of a specific material with a known IR reflection value would do the trick?  (or should that be the opposite of green?).

  • If you want to try to find the best white balance target you could browse the USGS spectral library (http://speclab.cr.usgs.gov/spectral.lib06/). I think LifePixel uses a gray card to calibrate their SuperBlue cameras. That's probably not perfect but it's not bad. It's what I use, although sometimes I use green grass when using a near-IR pass filter that doesn't allow visible light through. For many applications the important factor is consistency, so as long as everyone is using the same white balance that will suffice.

    You're right about near-IR itself not being a good indicator of plant health. It's important to have a visible band (preferably red) since plants are pretty unique in that they absorb most of the red light and reflect most of the near-IR light. Your statement “the amount of reflected visible light equals the amount of reflected IR, then the NDVI is 0” is also accurate but the problem is that with an uncalibrated camera we really don't know how the pixel values we get are related to amounts of reflected light. What we really want to measure is radiance at the sensor and we want to know the wave band that we're measuring. That's my goal if I get around to calibrating some cameras like I mentioned in my previous post.
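The NDVI endpoints discussed in these comments are easy to sanity-check with a few lines (my own sketch, using ideal reflectance values rather than camera pixel values, which is exactly the calibration gap the comment above describes):

```python
# NDVI endpoint check: 0 when NIR equals visible reflectance, and -1 only
# when no NIR is reflected at all; half the visible amount gives -1/3.
def ndvi(nir, vis):
    return (nir - vis) / (nir + vis)

equal_case = ndvi(1.0, 1.0)   # equal NIR and visible -> 0
no_nir = ndvi(0.0, 1.0)       # no NIR at all -> -1
half_nir = ndvi(0.5, 1.0)     # half the visible amount -> about -1/3
```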

  • Ok, I'm subscribed, although it'll be a while before I can contribute anything meaningful.

    I couldn't do anything with the photos from the site or this post, since I didn't know the white balance. I did have a look at the individual channels and noticed some things:

    - The blue channel loses a lot of its articulation/contrast. If you look at one picture of the lake, the 2-cam NRG has a lot more contrast in the patches. The same thing happens on this picture: http://i.publiclab.org/system/images/photos/000/000/093/medium/WBal...

    - The red channel also seems to be 'averaged out' a lot due to the white balance compensation.

    It's also easy to mix up concepts. Infrared by itself is not a measure of plant health. Lots of materials reflect IR, so you'd expect something to be in that channel for many materials. If the amount of reflected visible light equals the amount of reflected IR, then the NDVI is 0. If no IR is reflected at all, the NDVI is -1 (at half the visible amount it's only about -1/3). If you look at some NDVI pictures, you see this happening with water, but in general most lifeless objects are around 0.

    Which brings the conversation back to white balance calibration again. For a camera that records B, G, and IR, it should be a grey card for the B/G channels, but also a specific type of material that reflects the same amount of IR as it does visible light, so that this card can be used for calibration. Such a specialized card should make it possible to use the standard custom WB option in the camera (or otherwise recalibrate the pictures back on the PC). If relying solely on color rather than the material, then the colors in G and B would need to be somewhat different to offset the difference in IR reflection (compensating for it?).

    If this card is impossible to make, there may be alternative workflows using materials that fully absorb or reflect IR. Perspex and aluminium come to mind. You could even shoot stuff through perspex to confirm that the R channel is indeed picking up IR and not color information (or determine the "amount" of noise which is color).

    Does that make sense?
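The mechanics of the card-based calibration idea above can be sketched briefly. This only shows how per-channel gains are derived from a photographed reference card; as the comment argues, for a camera with a suppressed channel the "neutral gray" assumption itself is questionable, and the card values here are made up for illustration:

```python
# Sketch of card-based per-channel calibration: compute gains that map the
# photographed card's mean R,G,B values to a neutral (equal-channel) gray.
import numpy as np

def card_gains(card_rgb):
    """Gains that bring the photographed card's channels to a common mean."""
    card = np.asarray(card_rgb, dtype=float)
    return card.mean() / np.maximum(card, 1e-6)

card = np.array([180.0, 120.0, 60.0])   # hypothetical photographed card values
gains = card_gains(card)
balanced = card * gains                  # all channels now equal the mean
```

The same gains would then be applied to every pixel in the image; for an IR-sensitive channel the card's known IR reflectance would have to be folded into the target instead of a plain gray mean.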

  • Gerard - No worries about sounding too critical. This critical thought is important. I'd also like to better characterize these simple NIR cameras. There are protocols to do that but I need the time and connection to a suitably equipped lab. Your intent to use RAW formatted images is good. There is a general aversion to using RAW images since it adds complexity to processing/viewing, recording times are slower and they take a lot more space but it's the only way to get reliable numbers. 

    You (and anyone else reading this) should consider joining the discussions at http://publiclab.org through research notes and email lists. If you do some testing the folks there would be thrilled to hear about your work.

  • Ned - apologies if anything I said sounded too critical. I had some time to think more about the subject yesterday, and indeed, I don't think there'll ever be a purely scientific way to approach this subject unless you start measuring the conditions or get into specialized equipment. It's a bit of a black art. It's likely that interpretation in the end is best done by looking at how 'correct' the image looks visually.

    The interpretative result is good enough for one set of conditions to provide a sense of what's going on. I think, however, that the more the results can be tied to some known reference (even if it's the camera internals or some other inexact reference), the more it would provide a basis for comparisons between sets (history?), regardless of the exact conditions under which the shooting took place. That not only has benefits for comparison, but also helps to deliver predictable results without too much tweaking in post-processing.

    Unfortunately I don't have a NIR camera (yet), I'd like very much to experiment with this myself and see where it goes. There are two experiments in workflow I'd like to execute for this super blue filter:

    1. Produce an image with white balance at 5000 K with 0 tint. The B channel now contains an uncompensated amount of reflected visible light, produced by reflections of chlorophyll-a. R now contains recorded infrared. I'm assuming that NIR (R) can be used as is without compensation, but B needs to be compensated separately for the white balance at the time of shooting. It should be easier to do this if the source is a RAW image. Eventually these are the two channels that are used as the basis for the NDVI calculation.

    2. An alternative is to shoot two separate images from a tripod. One with 5000 K, 0 tint, to get a NIR channel. Another with the correct white balance setting for the conditions, to provide a 'correct' blue channel. I'd very much like to see what the result is if these are used together. In the link here: http://publiclab.org/notes/cfastie/04-20-2013/superblue, the Rosco filter shifted the colors too far, upsetting the absolute levels and therefore the balance between recorded NIR and visible light in relation to one another.

    Anyway, enough hypothesis, for now I'm going to see if I can do anything interesting with the photos in this article.
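Experiment 1 above can be sketched roughly as follows. The channel assignments (R as NIR, B as visible) follow the super-blue setup described in this thread, but the lack of any per-channel compensation is an assumption; this is an illustration of the idea, not a validated calibration:

```python
# Single-image "super blue" NDVI sketch: fixed 5000 K white balance assumed,
# R channel treated as NIR, B channel treated as reflected visible light.
import numpy as np

def superblue_ndvi(rgb):
    """rgb: H x W x 3 array from a super-blue/Rosco 2007 photo."""
    nir = rgb[..., 0].astype(float)  # red channel: records mostly NIR
    vis = rgb[..., 2].astype(float)  # blue channel: records visible light
    return (nir - vis) / np.maximum(nir + vis, 1e-6)
```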

  • Gerard - You raise a lot of interesting points that highlight the blurring of art and science.  In the next week or so I'll try to post a research note on http://publiclab.org that digs deeper into some of the points you raise. In the meantime here are some quick thoughts.

    I don't think the LifePixel SuperBlue filter is necessarily the optimal filter for creating NDVI using a single camera, but it works, it's something that I and others know about, and the Rosco 2007 filter has similar properties.

    The color of the pictures using a SuperBlue or Rosco 2007 filter is largely due to the white balance settings. The yellowish vegetation results when the white balance is set using a gray card. Both the red and green channels of the camera sensor record mostly near-IR light and combining red and green bands will give a yellowish appearance since plants reflect a lot of near-infrared light.

    There is no “correct” way to display false color near-infrared images. The de facto standard is to simulate the colors from film near-infrared photos, but there are lots of other options. If the intent is to make a pretty picture, then you select the band order (NIR, R, B or whatever you want) and make enhancements to make it more beautiful. There is some stunning infrared imagery out there.

    From a more scientific view you can approach it from a qualitative or quantitative perspective. From a qualitative view the intent is usually to improve interpretability of the image, which can in some cases result in an ugly-looking but easy-to-interpret image. For agriculture I think this will be the sweet spot for most people. Farmers will be interested in relative differences in a field to see which areas need more attention as far as fertilizer or pest control.

    For quantitative work the normalized difference vegetation index (NDVI) seems to be the primary workhorse. The peer-reviewed literature is abundant with articles correlating NDVI to all sorts of plant growth processes, and there is a whole industry being developed to automate how this information can be used in agriculture. This is not an easy task, and I think there will be disappointment from people expecting an inexpensive, easy-to-use solution to treating crops in the near future. Some of these simple camera systems (mostly DSLR cameras) have been calibrated and are used to record radiance for known wavelengths.

    It's important to know that NDVI doesn't have a single “standard”. You can try to create an NDVI product that is similar to another NDVI product like some of the satellite NDVI products from Landsat and MODIS but that's about it.

    So, my take is that people are making visually pleasing images and some are using these for scientific studies and some are doing both. I expect the greatest benefit in the next 10 years or so will come from visually interpreting these images but automated methods are slowly getting better. I also think that the value of normal color imagery is underestimated.

    Enough for now...

    Ned
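The point above about NDVI having no single standard matters in practice: before comparing images, you at least need to know how NDVI values were mapped to pixel values. One simple convention (my own choice here, not any official standard) is a linear stretch of [-1, 1] onto 8-bit gray:

```python
# Map NDVI in [-1, 1] linearly onto [0, 255] for display. Any such mapping
# should be documented alongside the image so others can invert it.
import numpy as np

def ndvi_to_uint8(ndvi):
    """Linearly scale NDVI values in [-1, 1] to an 8-bit display range."""
    return np.clip((np.asarray(ndvi) + 1.0) * 127.5, 0, 255).astype(np.uint8)
```

Lookup-table colormaps (as used by many NDVI tools) are just a second mapping applied on top of a stretch like this one.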

  • Well, for standard visible light photography, Cambridge in Colour has some very good explanations:

    http://www.cambridgeincolour.com/tutorials/white-balance.htm

    White balance compensates for hues emitted by light sources that are not true white. What I see in the Rosco filter work is an attempt to make the photos look equal to the photos from the LifePixel camera. What I miss in that discussion is whether the LifePixel SuperBlue result is actually the result you'd want for producing NDVI images; what makes it the ultimate reference?

    Is it correct to make assumptions about what these NIR/GB images should look like? What is important in those images is the ratio between reflected visible light and NIR. So I wouldn't do anything to upset that relationship by trying to make it look like a "good looking photo", which it is not. Doing that would probably reduce the resolution of the measurements or otherwise bias them.

    There are some specific differences between the filter used by LifePixel and the Rosco. This could explain why the white balance needs to be different to account for these filter differences (so that the R channel can actually be correctly compared to the B and G channels). The recorded colors follow curves, not linear responses, which means that the response across the frequencies covered by those channels needs to be further analyzed.

    One strong "indicator" here is that the filter's response to green is very small. When you shoot anything, it will come up as purple, since green is basically absent. In a lot of these pictures I notice that some of the growth is yellow, which suggests that green is mixed into the image by shifting the white balance away from this purple.

    Anyway, I'm new to this stuff too, but I'd love to see this explained mathematically somewhere. At the moment I get the impression people are trying to make visually pleasing images. I think of these images as data, nothing else.

  • For a great tool to post-process NIR and visible images, see:

    https://github.com/nedhorning/PhotoMonitoringPlugin

    Ned Horning's tool for Fiji or ImageJ processes directories of synced image pairs. An example comes from a Fort Mason (San Francisco) community garden KAP session; for more, see:

    http://publiclaboratory.org/notes/patcoyle/5-6-2013/public-lab-norc...


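The "directories of synced pairs" idea the plugin implements can be sketched like this. This is a hedged illustration, not the plugin's actual pairing logic: it assumes visible and NIR shots share filenames across two directories, whereas the real plugin has its own pairing options.

```python
# Match visible and NIR photos by shared filename across two directories,
# yielding (visible_path, nir_path) pairs ready for per-pair NDVI processing.
import os

def paired_files(vis_dir, nir_dir):
    """Yield (visible, NIR) path pairs for filenames present in both dirs."""
    nir_names = set(os.listdir(nir_dir))
    for name in sorted(os.listdir(vis_dir)):
        if name in nir_names:
            yield os.path.join(vis_dir, name), os.path.join(nir_dir, name)
```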
  • It's amazing to see 3DR getting more involved in agricultural applications.  Agriculture is going to be one of the largest civilian drone markets out there! Bravo, 3DR! 
