Visible (RGB) and Full Spectrum (RGB+NIR) Imagery - Geo-referenced NDVI Generation and Remote Sensing by UAV

Thanks to Pteryx for this great data set! In order to generate a geo-referenced NDVI / EVI / EVI2 vegetation index, we need to fly the area of interest (AOI) with both a visible (RGB) and a full spectrum (RGB+NIR) camera. Once the RGB and RGB+NIR images are processed inside DroneMapper, we have two orthomosaic results from which we can generate a pure NIR ortho. To do this, we use GDAL and the following command:

/usr/local/bin/gdal_calc.py -A VIS.tif -B NIRVIS.tif --outfile=NIR.tif --calc="(A - B)"

Now that we have created our pure NIR orthomosaic, we can use it to generate NDVI or other vegetation index calculations with OTB (Orfeo Toolbox) in an automated fashion.

/usr/local/bin/otbcli_BandMath -il NIR.tif VIS.tif  -out ndvi.tif -exp "ndvi(im2b1, im1b1)"

In order to process the orthomosaic TIFs, they need to be exactly the same size and pixel resolution. OTB also has many other useful commands for remote sensing work. The original flight covers an area of interest of 1.0 sq km at 10 cm GSD.
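
If your two orthomosaics come out of processing with slightly different extents or pixel sizes, one way to bring them onto a common grid is to resample one onto the other's footprint with gdalwarp. A minimal sketch, assuming the 10 cm (0.1 m) GSD from this flight; the extent values are placeholders you would copy from gdalinfo on the reference ortho:

/usr/local/bin/gdalwarp -tr 0.1 0.1 -te <xmin> <ymin> <xmax> <ymax> -r bilinear NIRVIS.tif NIRVIS_aligned.tif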

Thanks -- JP @ DroneMapper

Comment by Deon van der Merwe on February 10, 2013 at 4:48pm

Nicely done! Dronemapper does a great job on the image processing.

A simpler, arguably comparable method to create NDVI data is to use an "NDVI" converted camera from MaxMax.com and calculate a blue NDVI. The converted camera senses blue, green, and NIR. The blue is substituted for red in the NDVI calculation because blue light absorption is also correlated with photosynthesis. The low altitude of a small UAV mitigates atmospheric interference in the blue band. Having all the bands needed for an NDVI produced in a single, affordable camera greatly simplifies processing into an NDVI data layer. BTW, I have no affiliation with MaxMax.com; I just wanted to show the availability of an alternative to the traditional RGB+NIR imagery for vegetation analysis. It becomes a practical alternative when we use low-flying UAVs. Simplicity and efficiency = good for agricultural applications.
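
For anyone who wants to run the numbers, the blue NDVI band math can be done the same way as JP's BandMath example above. A minimal sketch, assuming a single 3-band orthomosaic from the converted camera (BNIR.tif is a hypothetical file name) with the NIR signal landing in the red channel slot (band 1) and blue in band 3 -- check your own camera's band layout first:

/usr/local/bin/otbcli_BandMath -il BNIR.tif -out bndvi.tif -exp "(im1b1 - im1b3) / (im1b1 + im1b3)"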

Below is a sample of a false color blue NDVI data layer created using a single camera. Blue indicates low NDVI (bare soil in this example); yellow to red indicates higher NDVI  (wheat in this example):

Comment by JP on February 10, 2013 at 4:51pm

Thanks for the great comment! You are exactly right, the same data can be obtained with a single camera. The only drawback is you don't get a visible (RGB) image with the results. Thanks!

Comment by Deon van der Merwe on February 10, 2013 at 5:10pm

I hope someone will figure out how to make an affordable, light weight, true 4-band camera. It will have soooo many applications!

Comment by JP on February 10, 2013 at 5:16pm

Me too! I think maybe Tetracam is the closest possibility?

Comment by LanMark on February 10, 2013 at 9:05pm

@FlyingMerf, I would love to get into doing this sort of analysis out in my area. Are there any good sites that show how to properly do vegetation stress analysis? I like the modified camera approach, as you would probably have a smaller payload than with other methods.

Comment by Deon van der Merwe on February 11, 2013 at 5:30am
Mark, I am not aware of any sites that will give you all the information, and as you can imagine, there are many approaches to this type of analysis. We started a graduate course in sUAS Agricultural Applications this year at Kansas State University to help graduate students be more successful when using sUAS-based imaging in their research. We expect this field to expand dramatically when the FAA opens up the regs for private sUAS use in agriculture, and we are starting to develop applications that can be implemented by producers when the time comes. I'd be happy to share more details with you. I sent you a friend request.

Comment by Rory Paul on February 11, 2013 at 7:37am

JP

Have you got any data with regard to the accuracy of the pixel match between the layers? The reason I ask is that, in my opinion, the quantitative value of a two-camera method is questionable as long as there is no guarantee that you are working with data that is accurately superimposed.

Comment by JP on February 11, 2013 at 7:43am

Hi, the pixels between images are the same size: ~10 cm GSD.

Each orthomosaic (VIS and NIRVIS) scene is the same size and GSD: 10772 x 12442 pixels.

The flight was completed with two different cameras, and both geo-referenced orthomosaics were generated from that same flight. We provided a shapefile of the area of interest to our processing system and it clips the results to that AOI. This ensures you have correctly sized TIFs to complete the BandMath.

Tools such as OTB won't let you process TIFs of different sizes for NDVI/EVI calculation. Thanks
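
If you are doing the clipping yourself with GDAL rather than through our system, something along these lines works (AOI.shp is a hypothetical shapefile name for your area of interest):

/usr/local/bin/gdalwarp -cutline AOI.shp -crop_to_cutline -tr 0.1 0.1 VIS.tif VIS_clip.tif
/usr/local/bin/gdalwarp -cutline AOI.shp -crop_to_cutline -tr 0.1 0.1 NIRVIS.tif NIRVIS_clip.tif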


Comment by Rory Paul on February 11, 2013 at 7:54am

The problem, in my opinion, is that unless you can guarantee that the data from the different channels are aligned down to the pixel, and that there are no geometric differences (angle/rotation) between the layers produced during the orthomosaic-making process, then when you perform the NDVI calculation you cannot guarantee that the IR and red data are being drawn from exactly the same pixel. That could mean the difference between looking at the edge of a leaf and looking at dirt.

Comment by JP on February 11, 2013 at 8:01am

Creating orthomosaics of the same GSD and same AOI from the same flight is the best course of action to guarantee that the pixel band math matches. Certain tools have these checks built in. Of course, you'll probably experience more noise from lighting conditions/shadows and/or the lack of a good normalization routine as well.
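
As a quick sanity check before running the BandMath, one way to confirm both orthos report the same size, origin, and pixel size is:

/usr/local/bin/gdalinfo VIS.tif | grep -E "Size is|Origin|Pixel Size"
/usr/local/bin/gdalinfo NIR.tif | grep -E "Size is|Origin|Pixel Size"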
