
Thanks to all of you for your great feedback on my last blog post on airframe selection. There has been a lot of interest in, and misinformation about, NDVI images here recently, so I thought the topic deserved its own post. In it, I briefly cover a little of the history, why NDVI may not be the best index for UAVs, and some ground-truthing we did with NDVI maps. It ended up being a little too long to summarize here, but check it out at http://agribotix.com/blog/2014/6/10/misconceptions-about-uav-collected-ndvi-imagery-and-the-agribotix-experience-in-ground-truthing-these-images-for-agriculture

I would encourage anyone interested in the topic to check out some of Compton Tucker's original papers on the subject, as they are really illuminating as to how the index came about. There is a lot of grassroots research on Public Lab dedicated to making NDVI work with different systems, but NDVI was never a gold standard, and different equipment may require different image processing metrics.

Replies

  • Canon SX260 + Event38

    Agribotix script and LUT

    Maize

    mais1.jpg

    • Hi Antonio, 

      If I may ask, what image processing system are you using? Fiji (ImageJ)?

      Thanks,

      Jim

    • I used ImageJ.

      Here is the same photo processed with AgPixel, a nice program but too expensive:

      mais1.Enhanced3.jpg

  • Hi Daniel,

    I enjoyed your well-written article.

    I have been playing with the BG3 filter.  You mentioned that the new Event 38 custom filter will give a better Green channel, and I can certainly see this by looking at the published transmission curve.

    However, I saw that the custom filter's transmission curve shoots up at about 670nm, reaching almost maximum at 700nm.  This compares to the BG3, which doesn't reach maximum until about 740nm.

    See the attached vegetation spectral curve that I found on the internet.  Assuming this is correct, you really want the transmission curve to rise no earlier than about 710nm.  Otherwise, your NIR channel will be collecting light to the left of where the curves for healthy and unhealthy vegetation cross each other.  In this region, UNHEALTHY vegetation will be contributing to the NIR channel MORE than HEALTHY vegetation, which is the opposite of what you want.  This will reduce the effect of everything to the right of that crossover point.

    Going back to the transmission curves mentioned at the top of my post, it would seem that the BG3 is actually better in this regard.  I guess that what's going on to the left of that crossover point is minor compared to the huge wide band of NIR wavelengths above, say, 710nm, so the BG3 is only *slightly* better in the NIR channel, and it's no big deal.
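
    Here's a rough numerical sketch of that trade-off in Python.  The transmission ramps and reflectance steps below are illustrative placeholders I made up, not measured data for the BG3 or the Event 38 filter:

```python
# Estimate how much sub-crossover (< 710nm) light leaks into the NIR channel
# for two filters with different transmission ramps. All curves are toy data.
import numpy as np

wavelengths = np.arange(650, 741, 1)  # nm, around the red edge

def ramp(start_nm, full_nm):
    """Linear 0-to-1 transmission ramp between start_nm and full_nm."""
    return np.clip((wavelengths - start_nm) / (full_nm - start_nm), 0.0, 1.0)

bg3_like = ramp(670, 740)     # reaches maximum near 740nm, like the BG3
custom_like = ramp(670, 700)  # near maximum by 700nm, like the custom filter

# Toy reflectance: unhealthy vegetation reflects MORE than healthy below the
# ~710nm crossover, and less above it.
healthy = np.where(wavelengths < 710, 0.05, 0.50)
unhealthy = np.where(wavelengths < 710, 0.15, 0.30)

below = wavelengths < 710
for name, t in [("BG3-like", bg3_like), ("custom-like", custom_like)]:
    bad = np.trapz((t * unhealthy)[below], wavelengths[below])
    good = np.trapz((t * healthy)[below], wavelengths[below])
    print(f"{name}: sub-crossover signal, unhealthy={bad:.2f} vs healthy={good:.2f}")
```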

    I guess what I am trying to ask is:  Why is the new custom filter from Event 38 better than the BG3?  Yes, it can see more Green, so you can use Green as the visible channel instead of Blue.  I'm guessing that is the key here.  So the question then becomes:

    Why is it better to use Green than Blue for the visible channel?

    Thanks for this interesting discussion!

    Richard (in Australia)

    vegetation-spectral-curve.jpg

    • Hi Richard,

      For a camera equipped with the Event38 filter, I believe green is a better choice than blue because the blue channel picks up quite a bit of NIR signal. Based on my fairly unscientific experiment of imaging a hot stove that was not emitting in the visible and quantifying pixel intensity, I saw that the blue channel picked up about 80% as much NIR as the red channel, while the green channel picked up only a small fraction. I don't have the exact numbers handy, but you should do the experiment yourself if you're curious.
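
      If you want to try it, here is a minimal sketch of that measurement in Python (the file name is a placeholder, and I'm assuming the Pillow library; any image library would do):

```python
# Image a source that emits only in the NIR (e.g. a hot stove element), then
# compare mean channel intensities to estimate per-channel NIR pickup.
from PIL import Image
import numpy as np

img = np.asarray(Image.open("hot_stove.jpg"), dtype=float)  # hypothetical file
red, green, blue = img[..., 0], img[..., 1], img[..., 2]

red_mean = red.mean()
print(f"blue picks up {blue.mean() / red_mean:.0%} of the red channel's NIR")
print(f"green picks up {green.mean() / red_mean:.0%} of the red channel's NIR")
```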

      That said, for both the Schott and Event38 filters, the channels are really blue + NIR, green, and NIR. A more or less intact green channel seemed valuable for calculating vegetation indices, but, if you play with my script, you'll see there is really no difference between using the blue or the green as the visible channel, so it probably doesn't matter which one you choose.
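
      If you'd like to check this on your own images, here is a minimal sketch of the comparison (the file name is a placeholder, and this is Python rather than my actual ImageJ script; with these filters the red channel carries the NIR):

```python
# Compare DVI (NIR - VIS) computed with blue vs. green as the visible channel.
from PIL import Image
import numpy as np

img = np.asarray(Image.open("field.jpg"), dtype=float)  # hypothetical file
nir, green, blue = img[..., 0], img[..., 1], img[..., 2]  # red channel ~ NIR

dvi_blue = nir - blue
dvi_green = nir - green

# Correlation between the two index maps; values near 1 mean the choice of
# visible channel makes little practical difference.
r = np.corrcoef(dvi_blue.ravel(), dvi_green.ravel())[0, 1]
print(f"correlation between blue- and green-based DVI: {r:.3f}")
```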

      I haven't compared the two filters in detail, but I have done significant ground-truthing with images taken using the Event38 filter and the NIR-VIS index, and I can say with confidence that we are getting solid results. I will show some of them off at DroneCon, if you'll be there, but if not I'll write up a blog post in the next week or so. I think you would likely get good results using the Schott as well.

      I would also be careful with plots like the one you attached. There are a lot of reflectance spectra out there that try to present the information too generally. What is a healthy plant? What is an unhealthy plant? The sensitivities of the detectors in different cameras are also not widely published, and I think the solution to developing a reliable vegetation index is just going out, collecting a bunch of images, and seeing which ones correlate with what's going on on the ground.

      Good luck with your imaging and let me know if I can answer any more questions!

      Best,

      Daniel

    • P.S. I just thought of this:

      Since the Blue channel picks up a lot of NIR, you could install a Blue-cut filter, and use the Blue channel for NIR and Green for VIS.

      I expect it would give similar results to the method we were talking about in the previous posts: namely a Red-cut filter, allowing you to use the Red channel for NIR, and Green for VIS.

      I wonder if this is being done, and what would be the pros and cons of these two approaches?

    • Hi Richard,

      This is the other common way to collect NIR images, and there are a few posts on Public Lab about using this technique. You end up with NIR, green, and NIR + red channels. Before I figured out that DVI images were ground-truthing very well (I will post a bunch more examples in the next week or so), I was a little frustrated with NDVI and was planning on trying this out. Imaging is going very well now, so I'm sticking with our current setup, but I want to try this approach over the winter, when things slow down, to compare the two. I suspect they are equivalent, but it would be nice to know for sure.

      Best,
      Daniel

    • Very interesting discussion; am I too late to participate?  My background is in remote sensing - lots of hyperspectral data processing - so I admit my bias is toward lots of narrow bands. :-)

      I think I can help address some issues here. You mention that you don't see a lot of difference in using either the blue or the green channel to make a DVI.  I think the main thing that contributes to that is the fact that the bands in a camera or GoPro are so broad - the blue band also detects a lot of green light and vice versa. Look at the spectral plots of various cameras - MaxMax has published several. The bands overlap severely.  I'd wager you would see more difference in DVIs if your blue and green channels were much narrower - say 20 nm or so - like they are on a multispectral sensor. That's why I prefer narrow bands: they allow you to focus on specific spectral features.

      There has also been some discussion about getting anomalous NDVI or DVI values in shadows. I agree with Ned in that there is no reason why either NDVI or DVI should give you more anomalies. John Stuart, in his post on 6/12/14, showed a set of images where the NDVI indicated some vegetation signal in a few shadows.  If you look at his NIR image you do see some red in this area, so I would suspect there is indeed veg there - possibly weeds sucking up water between the crop rows.

      OK, so why are there anomalous NDVI or DVI values in some shadows? It's because healthy plants with a good leaf structure reflect NIR light so efficiently - more so than any other wavelength.  It's the leaf structure that reflects the NIR; this helps the plant stay cool. So where does that scattered NIR light go? It goes everywhere!  Including the shadows. So if we follow the path of some NIR photons, they would scatter from a nice healthy plant into the shadows, with some absorbed but others reflecting back up to the sensor.  The effect is that this scattered light adds to the apparent NIR reflectance of the shadows.  And since there isn't much signal coming from shadows anyway, it doesn't take much to produce an increased NIR signal there, and that would account for your anomalously high NIR-VIS signal in dark areas.

      I learned all this in class but was really struck by this when I was working with some low altitude hyperspectral data where there were some bushes and grass next to a road. I converted the data to reflectance and then extracted a reflectance spectrum from the center of the road. I was amazed by how much that road reflectance spectrum looked like vegetation reflectance! There was a strong NIR signal. I was 4 or 5 pixels away from any veg but the scattered NIR light from the veg contributed strongly to the signal of the dark road. Same would happen with shadows as there is not much signal coming from them either.  

      For accurate work you should convert your images to reflectance so you don't have variations due to sensor sensitivity differences, scattered light, the solar irradiance curve, etc. That's not to say you can't get a good DVI product from a camera or GoPro. You can; it's just that you can't really compare one collection to another unless you go to reflectance and normalize the vegetation index to account for illumination variations. That's why the Normalized Difference Vegetation Index (NDVI) is so widely used.
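
      Here is a minimal sketch of why that normalization matters, assuming float channel images scaled to [0, 1]:

```python
# DVI scales with illumination; NDVI divides by total brightness, so a common
# illumination factor cancels and collections become comparable.
import numpy as np

def dvi(nir, vis):
    """Difference Vegetation Index: scales with illumination level."""
    return nir - vis

def ndvi(nir, vis, eps=1e-6):
    """Normalized Difference Vegetation Index: k*nir and k*vis give the
    same NDVI, so a shared illumination factor k drops out."""
    return (nir - vis) / (nir + vis + eps)

nir = np.array([0.6])
vis = np.array([0.2])
# Halving the illumination halves DVI but leaves NDVI essentially unchanged.
print(dvi(nir, vis), dvi(0.5 * nir, 0.5 * vis))    # [0.4] [0.2]
print(ndvi(nir, vis), ndvi(0.5 * nir, 0.5 * vis))  # both ~[0.5]
```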

      Cheers,

      Joe

    • Thanks Daniel, that all makes sense.

      DroneCon sounds great.  Unfortunately, I'm currently 14,212 km from Boulder, Colorado.

      I love the hot stove idea.  Great application of physics in household appliances!

      Wow - 80% is huge. I knew there was some contamination of NIR in the blue channel, but assumed it was small.  It seems, then, that using the BG3 and calculating NIR-VIS with the blue channel as VIS, you're actually getting something like:

      NIR - (Blue+0.8NIR)

      = 0.2NIR - Blue

      i.e. not good.
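
      To put numbers on it (a toy check in Python; the 80% leakage coefficient and the channel values are assumptions):

```python
k = 0.8                              # assumed NIR leakage into the blue channel
nir, blue_true = 0.6, 0.2            # hypothetical true signals
blue_measured = blue_true + k * nir  # what the blue channel actually records
print(nir - blue_measured)           # -0.08, i.e. 0.2*NIR - Blue: sign flips!
```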

      Hence your idea to use a different filter, and green for VIS, where less NIR is present.

      (Let me know if my reasoning is wrong here.)

      The only thing that I didn't understand was when you said:

      "...if you play with my script, you'll see there is really no difference between using the blue or the green as the visible channel, so it probably doesn't matter which one you choose."

      Based on what we've talked about, surely there should be a difference, and green should be better?

      I'm also curious what you and others think about the various options for creating Vegetation Index imagery. There's ImageJ of course, but is anyone using Pix4D or AgPixel? My trial for AgPixel expired before I received my BG3 filter, so I was unable to test it with any of my own images. Pix4D's latest version appears to have good options for Vegetation Indices, but it's not in everyone's budget :-)

      Cheers,

      Richard

      (Australia)

    • Hi Richard,

      Your reasoning is spot on. So much NIR light leaks into the blue channel that, in theory, using the green should deliver better results. In practice, though, I just don't see it: generating a DVI image using the blue yields an almost identical result to using the green. I believe it's because the difference between the NIR and either the blue or the green is much greater than the difference between the blue and the green.

      We use Fiji for the calculations. I don't think it makes much sense to buy Pix4D unless you also need it for the stitching. I've never used AgPixel.

      Best,

      Daniel
