The right tool for the job

There have been recent posts on the “wall” about scientific and “toy” cameras for mapping. The focus is on NDVI, which is simply an index describing the difference between reflected red and near-infrared radiation from a target. It's an index because it is unitless, and it is normalized so values always fall between -1 and +1. It tends to be a good indicator of plant vigor and has been correlated with different aspects of plant productivity.
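For reference, the index is computed per pixel from the near-infrared (NIR) and red band values:

```latex
\mathrm{NDVI} = \frac{\mathrm{NIR} - \mathrm{Red}}{\mathrm{NIR} + \mathrm{Red}}
```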

In any digital camera that I'm familiar with, a pixel starts its life as a voltage. The next step is where the scientific and point-and-shoot cameras diverge. In a scientific camera the voltage is simply calibrated to output radiance; in a point-and-shoot camera it follows a more complex processing path to output something pleasing to the human eye. Scientific cameras try to measure physical variables as accurately as possible and point-and-shoot cameras try to make a good-looking photograph – science vs. art. Point-and-shoot cameras are more complex than scientific imagers, but they use lower-quality, mass-produced parts to keep costs down, whereas scientific cameras use precision everything, produced in low volumes. That's a brief summary, but the bottom line is that the two cameras are designed for different uses. To imagine that a camera designed for making pretty pictures can be used for scientific studies seems a bit ludicrous – or does it? It depends on what you want to do.

There is a good bit of work going on to convert the point-and-shoot camera from an art tool into a scientific tool. This is an area that fascinates me. I realize there are serious limitations when working with low-quality sensors and imaging systems, but some (perhaps many) of those radiometric and geometric imperfections can be modeled and adjusted using calibration techniques and software. For example, there are a few articles in the peer-reviewed literature about people calibrating commercial digital cameras (usually DSLRs) to record radiance, and the results are pretty encouraging. I have been developing my own workflow to calibrate point-and-shoot cameras, although I'm using simple DIY approaches since I no longer have access to the precision lab equipment that would allow me to characterize my cameras more accurately. If anyone is interested, I post my calibration experiments on the Public Lab web site (http://publiclab.org/). I'm always looking for feedback to advance this work, so comments are welcome. My intent is to convert simple cameras into the best scientific tools possible.
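To give a sense of what that kind of calibration amounts to, here is a minimal sketch (not my actual workflow, and the coefficient values are made-up placeholders) that applies a per-band linear gain and offset to raw pixel values to estimate radiance:

```python
import numpy as np

# Hypothetical per-band calibration coefficients (gain, offset) that you would
# derive from imaging targets of known radiance or reflectance. These numbers
# are placeholders, not measurements.
CAL = {
    "red": (0.0021, -0.15),
    "nir": (0.0034, -0.22),
}

def dn_to_radiance(dn, band):
    """Estimate radiance from raw digital numbers (DN) with a linear model.

    Assumes the sensor response is linear in DN, which is roughly true for raw
    output but not for a camera's processed JPEGs.
    """
    gain, offset = CAL[band]
    return gain * np.asarray(dn, dtype=float) + offset
```

The real work is in estimating those coefficients for a particular camera, which is what the calibration experiments are about.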

When deciding which instrument to use you need to consider the goals of the project and the available financial resources. On the financial side you need to consider purchase cost, maintenance, and the replacement cost if the camera gets damaged. There is no comparison from a cost perspective. On the bargain side of scientific imagers you should expect to pay a few thousand dollars, and if you want a large-format mapping camera it's in the ballpark of $1 million. Precision/scientific-grade cameras are very expensive, require careful maintenance and recalibration (which can also be costly), and if you have one in a UAV that crashes you will likely lose a lot. You can get a used digital camera and convert it to an NDVI-capable imager for well under $100, or purchase one designed for mapping, like the Mapir, for about $300.

What about accuracy, precision, and stability? Clearly, instruments designed with these qualities in mind will be better than something made to take pretty pictures. A more appropriate question is: what is good enough for our purposes? I'll focus on NDVI mapping; it's important to realize that different applications (e.g., creating 3D point clouds, ortho-mapping, land cover classification) will have other qualities to consider. One important factor is radiometric accuracy. Although I'm trying to improve what we can get from point-and-shoot cameras, I realize I will never attain the accuracy or precision possible with scientific imagers. How important are radiometric qualities for NDVI mapping? In most of the applications I see on this and similar forums, people are mostly interested in relative changes in NDVI throughout an image, not absolute NDVI values. Some folks want to monitor NDVI over time, and in that case it's important to be able to standardize or normalize NDVI, but that is possible with calibration workflows. For these applications a well-designed and calibrated point-and-shoot camera can perform well enough to provide the information required, such as spotting problem areas in an agricultural field.

One point that is often overlooked is that close-range imaging and NDVI typically do not go well together. The problem is that we are imaging scenes with leaves, stems and soil, and at the fine resolution provided by most point-and-shoot cameras we are trying to get NDVI values from very small areas on the ground. For example, we can see different parts of a leaf, and each part of the leaf is angled somewhat differently, which will affect the NDVI value. Our scenes tend to be very complex, and you can have the most accurate and precise instrument available and still be disappointed because of the physical issues (bidirectional reflectance, mixed pixels, small-area shadows...) that create noise in the images. It is certainly nice to reduce as many sources of noise as possible, but I'm not convinced (at least not yet) that the improved radiometric performance of a scientific camera is enough to overcome all of the noise coming from the scene and justify its use.

As for the Mapir camera, I received one last week and am trying to set aside time to calibrate it and see how well it performs. My initial reaction is that it is a nice, compact camera well suited to small-UAV mapping. I would prefer a red dual-pass filter, but I expect that and other enhancements will become available in future versions. I like the fact that someone is focused on developing practical, low-cost mapping cameras.

I welcome any comparisons between cameras and hope we can work together to improve the output we get from simple inexpensive cameras.  


Replies

  • Ned, great write-up on a difficult topic. I have some additional points on calibration:

    John Sulik, you mention a few lines back “Teflon works well up to 1000nm if it is elevated far enough above ground to allow light underneath it to diffuse. I just use brushed aluminum spray painted light grey to ensure pixel values do not saturate.” I suppose you elevate the Teflon target so that the transmitted light is not reflected back by the ground beneath, which would contaminate the reference target's apparent reflectance?

    Why not use 2, 3, or 4 pieces of Teflon stacked on top of each other so nothing goes through? Also, regarding the grey spray paint: does the target remain Lambertian, and do you have some spectra to share?

    A Spectralon target would of course be ideal as a reference but is too small, especially if flying at high altitudes. Even so, what reference target size do you consider “large enough” in terms of GSD (ground sampling distance) so that it is not contaminated with scattered light from background pixels?

  • Thank you all for responding to the normalization questions. So, if I wanted to set up a workflow for normalizing an image I would:

    1. Clip the image to a polygon to remove all pixels outside my field of interest like gravel roads, any water, etc.  All I want is my field.

    2. I should now have an image with pixel values from 0-255 of only my field.  Correct?

    3. Calculate the NDVI map: NDVI = (NIR - Red) / (NIR + Red). (Do not use blue due to the carotenoid issue.)

    4. I should now have a normalized plant vigor map.

    5. ?? Somewhere in the above steps there should be an equation using the pixel values from the reflectance target. How do I use those values? Do I add or subtract the target pixel values to or from the field pixel values?

    6. Luis mentioned an incident light sensor. This sensor would record incoming energy from the sun and atmosphere. How are these values incorporated? What's the equation?

    I can't tell you how helpful this discussion has been. 

    Many thanks!!

    LW

    • LW - It's probably worth noting that there are two kinds of normalization, temporal and spatial. With temporal normalization the goal is to normalize images so that missions flown at different times (times of the day or days of the year) will have similar (simulated) incoming solar radiation (irradiance). Spatial normalization is focused on normalizing across an image or across a mosaic of images. When working with flat, even-canopy fields, spatial normalization probably isn't much of an issue as long as the cloud and atmospheric conditions are fairly even across the area during the entire mission.

      With regard to your workflow, the normalization should probably be step 1. The method John Sulik noted would work. Here is an example based on John's process. I'll use T1 and T2 for two imaging missions; the same method can be extended to include additional time periods. For both T1 and T2 you need a target that you use as the reference, and you also need to select one of the time periods to be the reference to which images acquired during other missions will be normalized. For this example I'll select T1 as the reference. For all missions you need to make sure the reference target appears in an image. Since the reflectance properties of the reference target do not change over time, we work on the assumption that if the pixel values for the target change, it is due to changes in irradiance. If the average pixel value for the target was 100 on T1 and 150 on T2, then all of the photos acquired during T2 would be divided by 1.5. Does that make sense?
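      A minimal sketch of that adjustment, using the numbers from the example (the function and variable names are just illustrative):

```python
import numpy as np

def normalize_to_reference(image_t2, target_mean_t1, target_mean_t2):
    """Scale a T2 image so its reference target matches the T1 target value.

    Assumes the target's reflectance is constant, so any change in its pixel
    value is attributed to a change in irradiance.
    """
    scale = target_mean_t2 / target_mean_t1  # e.g. 150 / 100 = 1.5
    return np.asarray(image_t2, dtype=float) / scale

# Every photo from the T2 mission gets divided by 1.5 before computing NDVI.
```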

      If you use a sensor to measure irradiance, you would take a similar approach to normalization. You would adjust image pixel values based on the proportional difference in irradiance (as measured by the sensor) between a reference photo and all of the other photos.
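      Sketched the same way, again with illustrative names rather than any particular sensor's API:

```python
def normalize_by_irradiance(image, irradiance_at_image, irradiance_at_reference):
    """Scale an image by the ratio of its measured irradiance to a reference value."""
    return image / (irradiance_at_image / irradiance_at_reference)
```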

    • Hi Ned

      The temporal difference between T1 (reference) and T2 does make sense.

      An issue I see with the target(s) is how many you will need. If you are flying at 400' over a 50-acre field with a Micasense multispectral camera (or some other camera), you will need many reference targets so that each image will have a target in its field of view. That gets costly in both money and the time spent laying out targets. The incident light sensor on top of the aircraft makes more sense to me. If you work in the southeast US you'll always have cloud cover during the day due to the relative location of the Gulf of Mexico and prevailing winds.

      I think the Right Tool For the Job here would be an incident light sensor tagged to each image. The irradiance value from the sensor would be divided into the image radiance values to get reflectance. Is this correct? But I'm not sure I really want reflectance if I'm looking at NDVI?

      Again, I appreciate all this knowledge and feedback.  I'm learning.

      LW

    • Hi LW -

      Imaging under clouds is less than ideal no matter which method you use for normalization. There are some methods we use in satellite remote sensing to reduce the impact of haze in an image, but I've never tried to apply those methods to aerial photos, although that should be possible with a multiband sensor like the Micasense.

      For NDVI you ideally want surface reflectance, but that has to be estimated/modeled since you're imaging from 400'.
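      For what it's worth, a common first-order estimate (assuming a Lambertian surface and ignoring the atmosphere between camera and canopy) relates reflectance to the measured radiance L and the downwelling irradiance E:

```latex
\rho \approx \frac{\pi L}{E}
```

      That is essentially the division you describe above; the factor of pi comes from integrating the radiance of a Lambertian surface over the hemisphere.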

      Ned

    • Moderator
      I believe that John linked this earlier, but this article from Pix4D has some great explanations on necessary workflow steps, as well as comparisons between converted cameras and multispectral cameras.

      Even if you're not using or planning to use Pix4D, you should read through this: https://support.pix4d.com/hc/en-us/articles/204894705-Camera-Requir...
      Camera Requirements for Precision Agriculture
    • Do the Mapir cameras have RAW output? According to the Pix4D guide they won't be any good without RAW output.

    • Moderator
      Even with the S100, I never use raw, admittedly due to ignorance. I haven't looked for it on the MAPIR, but I don't believe it offers it. I believe the converted cameras still have a use if you go through the radiometric calibration described and are honest and aware of the shortcomings.
      I just found a sheet of brushed aluminum that I was going to set up as John described for FFC, but I'm going to try to find a calibrated target as well.
      I don't have access to a Micasense yet, but I'll compare the MAPIR to the S100's raw output, if I can get the new S100 working.
    • From the Pix4D guide you linked to:

      "For filter­-modified consumer cameras, it is imperative not to use the JPG produced by the camera. The values have been heavily distorted to be visually appealing and this process is not reversible. Such an image will lead to heavily stretched and distorted NDVI. However, the Raw images (in whatever format, .raw, .dng, ...) do preserve the radiometric linearity and should be used instead of the JPG."

    • Moderator
      If there is a way to account for the distortion created in JPEGs for each camera, assuming it is even consistent, and output a less accurate NDVI result, that is at least an option.
      If this is entirely unrealistic and impossible, then I'm confused as to why so many of the well-funded data processing or collection services are either not able to produce RAW images, or don't use them or allow uploads. I'm OK with using less accurate options as long as the issues are conveyed to the customer. If it's entirely unrealistic to use them for NDVI, I'd think we could find another use.
      Sorry, I feel like I'm treating this thread as a diary right now.