The right tool for the job

There have been recent posts on the “wall” about scientific and “toy” cameras for mapping. The focus is on NDVI, which is simply an index that describes the difference between reflected red and near-infrared radiation from a target. It is an index because it is unitless, and since it is normalized its values always fall between -1 and +1. It tends to be a good indication of plant vigor and has been correlated with different aspects of plant productivity.
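For anyone who wants the arithmetic spelled out, NDVI is just the normalized difference of the red and near-infrared bands. A minimal sketch (the sample values below are purely illustrative):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Healthy vegetation reflects far more NIR than red, so the index approaches +1;
# bare soil or water sits near zero or below.
print(ndvi(0.7, 0.1))    # ≈ 0.75
print(ndvi(0.15, 0.12))  # ≈ 0.11
```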

In every digital camera I'm familiar with, a pixel starts its life as a voltage. The next step is where the scientific and point-and-shoot cameras diverge. In a scientific camera the voltage is simply calibrated to output radiance; in a point-and-shoot camera it follows a more complex processing path to output something pleasing to the human eye. Scientific cameras try to measure physical variables as accurately as possible and point-and-shoot cameras try to make a good-looking photograph – science vs. art. Point-and-shoot cameras are more complex than scientific imagers, but they use lower-quality, mass-produced parts to keep costs down, whereas scientific cameras use precision everything, produced in low volumes. That's a brief summary, but the bottom line is that the two cameras are designed for different uses. To imagine that a camera designed for making pretty pictures can be used for scientific studies seems a bit ludicrous – or does it? It depends on what you want to do.

There is a good bit of work going on to convert point-and-shoot cameras from an art tool to a scientific tool. This is an area that fascinates me. I realize there are serious limitations when working with low-quality sensors and imaging systems, but some (perhaps many) of those radiometric and geometric imperfections can be modeled and adjusted using calibration techniques and software. For example, there are a few articles in the peer-reviewed literature about people calibrating commercial digital cameras (usually DSLRs) to record radiance, and the results are pretty encouraging. I have been developing my own workflow to calibrate point-and-shoot cameras, although I'm using simple DIY approaches since I no longer have access to the precision lab equipment that would allow me to characterize my cameras more accurately. If anyone is interested, I post my calibration experiments on the Public Labs web site (http://publiclab.org/). I'm always looking for feedback to advance this work, so comments are welcome. My intent is to turn simple cameras into the best scientific tools possible.
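One small piece of that kind of workflow, by way of example, is undoing the nonlinear tone curve a consumer camera applies before its pixel values can be treated as roughly proportional to light. The sketch below assumes the JPEG follows the standard sRGB curve; real cameras apply their own proprietary tone curves, so this is only a first-order approximation, not my full calibration procedure:

```python
import numpy as np

def srgb_to_linear(dn):
    """Approximately undo the sRGB gamma so 8-bit JPEG values become
    roughly proportional to scene radiance."""
    s = np.asarray(dn, dtype=float) / 255.0
    return np.where(s <= 0.04045, s / 12.92, ((s + 0.055) / 1.055) ** 2.4)

# A mid-gray JPEG value of 128 corresponds to only ~22% linear intensity,
# which is why ratios of raw JPEG values can be misleading.
print(srgb_to_linear(128))
```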

When deciding which instrument to use, you need to consider the goals of the project and the available financial resources. On the financial side you need to consider purchase cost, maintenance, and replacement cost if it gets damaged. There is no comparison from a cost perspective. On the bargain side of scientific imagers you should expect to pay a few thousand dollars, and if you want a large-format mapping camera it's in the ballpark of $1 million. The precision/scientific-grade cameras are very expensive, require careful maintenance and recalibration (which can also be costly), and if you have one in a UAV that crashes you will likely lose a lot. You can get a used digital camera and convert it to an NDVI-capable imager for well under $100, or purchase one designed for mapping, like the Mapir, for about $300.

What about accuracy, precision and stability? Clearly instruments designed with these qualities in mind will be better than something made to make pretty pictures. A more appropriate question is: what is good enough for our purposes? I'll focus on NDVI mapping; it's important to realize that different applications (e.g., creating 3D point clouds, ortho-mapping, land cover classification) will have other qualities to consider.

One important factor is radiometric accuracy. Although I'm trying to improve what we can get from point-and-shoot cameras, I realize I will never attain the accuracy or precision possible with scientific imagers. How important are radiometric qualities for NDVI mapping? In most of the applications I see on this and similar forums, people are mostly interested in relative changes in NDVI throughout an image and not absolute NDVI values. Some folks want to monitor NDVI over time, and in that case it's important to be able to standardize or normalize NDVI, but that is possible with calibration workflows. For these applications a well-designed and calibrated point-and-shoot camera can perform well enough to provide the information required, such as spotting problem areas in an agricultural field.

One point that is often overlooked is that close-range imaging and NDVI typically do not go well together. The problem is that we are imaging scenes with leaves, stems and soil, and at the fine resolution provided by most point-and-shoot cameras we are trying to get NDVI values from very small areas on the ground. For example, we can see different parts of a leaf, and each part of the leaf is angled somewhat differently, which will affect the NDVI value. Our scenes tend to be very complex, and you can have the most accurate and precise instrument available and still be disappointed because of the physical issues (bidirectional reflectance, mixed pixels, small-area shadows...) that create noise in the images. It is certainly nice to reduce as many sources of noise as possible, but with a scientific camera I'm not convinced (at least not yet) that the improved radiometric performance is significant enough to overcome all of the noise coming from the scene to justify their use.

As for the Mapir camera, I received one last week and am trying to set aside time to calibrate it and see how well it performs. My initial reaction is that it is a nice compact camera well suited to small-UAV mapping. I would prefer a red dual-pass filter, but I expect that and other enhancements will become available in future versions. I like the fact that someone is focused on developing practical low-cost mapping cameras.

I welcome any comparisons between cameras and hope we can work together to improve the output we get from simple inexpensive cameras.  

Replies

    • John

      You used a reference panel to 'flat-field' correct all your images. Would you elaborate on that a little?

      Thank you!

      LW

    • Use GIS or RS software to draw a polygon/ROI inside the panel (excluding the edges to avoid mixed pixels). For each band, average the pixel values inside the polygon. Then, for each band, divide the rest of the pixels by that averaged value. A rough sketch of that calculation is below.
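      A minimal Python sketch of that panel division (the array layout and ROI bounds here are hypothetical; in practice the polygon comes from the GIS/RS software):

```python
import numpy as np

def normalize_to_panel(image, panel_roi):
    """Divide each band by the mean panel value in that band, so pixel
    values become relative to the reference panel."""
    r0, r1, c0, c1 = panel_roi            # rectangle drawn inside the panel
    panel_means = image[r0:r1, c0:c1, :].mean(axis=(0, 1))  # one mean per band
    return image / panel_means            # broadcasts band-by-band

# Example with a synthetic 100x100 two-band image, panel in the top-left corner
img = np.random.rand(100, 100, 2)
normalized = normalize_to_panel(img, (0, 10, 0, 10))
```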

    • I usually think of normalization as a relative adjustment and calibration as absolute. In other words calibration transforms pixel values to a physical value such as radiance or reflectance and normalization transforms pixel values so they have the same radiance scale even though the absolute radiance values aren't known. 

      Measuring incident light as noted by Luis Chavier is one way to calibrate (if the camera sensor characteristics have been measured) or normalize images. 

      Most of the images I have seen on this forum are of agricultural fields that are level, with even-aged (same canopy height) crops, which is ideal. If you are working in areas with terrain or an uneven canopy, the geometry can get quite complicated, and although the calibration and normalization approaches discussed here are still important, the results are typically not as nice without additional processing.

    • I've been wondering if this kind of normalization could be done using a sensor installed on the drone. It could measure incident light hitting the top side of the drone during flight, in different bands. This data could be timestamped and later tied to the images being taken in order to compensate for incident light via software.

      It seems to me it would work well if the images are always taken under a clear sky or under uniform cloud cover. And a single-point sensor shouldn't be too expensive.
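      One way that pairing could work in software (just a sketch of the idea; the log format and field names are assumptions, not an existing product):

```python
import numpy as np

# irradiance_log: list of (timestamp_seconds, per_band_irradiance) from the
# upward-facing sensor; each image has its own capture time.
def nearest_irradiance(image_time, irradiance_log):
    """Return the irradiance reading closest in time to an image capture."""
    times = np.array([t for t, _ in irradiance_log])
    idx = int(np.argmin(np.abs(times - image_time)))
    return np.asarray(irradiance_log[idx][1], dtype=float)

def compensate(image, image_time, irradiance_log):
    """Divide each band of a (rows, cols, bands) image by the matched
    incident-light reading, putting images on a comparable scale."""
    return image / nearest_irradiance(image_time, irradiance_log)
```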

    • Martin,

      1) It accounts for light levels by normalizing imagery to a common reference. The panel is serving as a proxy for irradiance (reflectance = radiance/irradiance); however, it is not total irradiance unless it is made out of Spectralon or painted with barium sulfate. Teflon works well up to 1000 nm if it is elevated far enough above the ground to allow light underneath it to diffuse. I just use brushed aluminum spray-painted light grey to ensure pixel values do not saturate.

      2) You can stitch all images collected under the same illumination conditions. If varying illumination conditions are present during your flight then the denominator in a normalized difference index may do a fair job of accounting for differences in illumination.

      If the same panel (in the same condition; no dirt or paint rubbed off) is present across time then you can compare images across time. The pixel values will be anchored to the panel you use, not absolute reflectance.

    • John, Ned, et alia: like LW and, I'm sure, many others, I'm following this discussion with interest.

      I would like to understand more about the calibration and flat-field image normalisation process, if you have the time to explain.

      1. Is the calibration required to standardise the images taken under varying light levels, or to calibrate them to a particular crop/vegetation type?

      2. Is the flat field/image normalisation required to standardise the images used to stitch together a larger image, or to standardise a stitched image to allow it to be compared with other stitched images, maybe taken on another day with a different light level?

      Apologies if my questions are a bit basic

      Regards

      Martin

    • If there is interest in this sort of image normalization I can integrate it into the FIJI photo monitoring plugin. It could be set up as a two-step process similar to what I use for calibration. The first step would be to calculate the band adjustment factors and the second step would be to apply those factors to a set of images from the same mission. 
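      The two-step structure might look something like this (a Python sketch of the concept only; the actual plugin is an ImageJ/Java tool and its internals may differ):

```python
import numpy as np

# Step 1: derive per-band adjustment factors from one image containing the panel.
def band_adjustment_factors(reference_image, panel_roi, target_value=1.0):
    """Factors that map the panel's mean value in each band to a common target."""
    r0, r1, c0, c1 = panel_roi
    panel_means = reference_image[r0:r1, c0:c1, :].mean(axis=(0, 1))
    return target_value / panel_means

# Step 2: apply those factors to every image from the same mission.
def apply_factors(mission_images, factors):
    return [img * factors for img in mission_images]
```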

    • Great stuff – I hate to think I'm partially responsible for the popularity of blue filters. My first NIR camera was a SuperBlue conversion from LifePixel, so perhaps I was overzealous at the time. For the last couple of years I've been pushing red filters on Public Labs, but maybe the damage was done in the early days.

      I remember the post you linked to, but I didn't buy the rationale for using DVI. I added the ability to create DVI images to my ImageJ plugin as a result of the lively discussion on this forum about that post. The shadow behavior is bizarre, but it also seems bizarre to me with DVI; it's just not as obvious since the values are artificially low instead of high. If shadows are unwanted they can be masked. In light shadows NDVI does reasonably well, but DVI will always give lower values. For example, if the reflectance from a well-illuminated field is 0.7 for NIR and 0.1 for red, the NDVI will be 0.75 and the DVI will be 0.6. If there is a light shadow in part of the scene and the light reflected off the field is, say, cut in half (NIR = 0.35, red = 0.05 as measured by the camera), then NDVI is still 0.75 but DVI drops to 0.3 (this arithmetic is sketched in code after this reply). I realize the proportion of red light goes up a bit relative to NIR in shadows, but I'm working on the assumption that it's not that different. Continuing that logic, I think NDVI can cope with atmospheric attenuation and even effects like vignetting better than DVI, but I haven't tested that. I'm assuming vignetting affects the red and NIR bands proportionally the same but have no idea if that's actually the case. Does this logic seem sound?

      Lastly, it seems as if it's just you (John) and me in this discussion, but I hope this dialog is useful to other folks, or we can take it offline if you're willing. I'm not so good with proper protocol on these lists. In any case, I appreciate your feedback.
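      Here is the worked example from the previous reply as code (it simply restates those numbers; the 50% shadow factor is the stated assumption):

```python
def ndvi(nir, red):
    return (nir - red) / (nir + red)

def dvi(nir, red):
    return nir - red

# Well-illuminated field, reflectance as measured by the camera
nir, red = 0.7, 0.1
print(ndvi(nir, red), dvi(nir, red))                   # ≈ 0.75, 0.6

# A light shadow that cuts both bands roughly in half:
# the ratio-based NDVI is unchanged, the difference-based DVI drops.
print(ndvi(nir / 2, red / 2), dvi(nir / 2, red / 2))   # ≈ 0.75, 0.3
```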

    • Let's keep the discussion online. 

      Shadow: Are you referring to cloud shadow or canopy self-shading? Also, a very, very, very smart engineer who knows a lot about multispectral cameras and UAVs once commented to me that NDVI can be thrown off (I think increased) by shadows, but I think this matter requires a well-structured experiment if we want to better understand exactly how shadows affect vegetation indices implemented with different arithmetic. I have used band ratios of visible bands for imagery acquired with a Canon at 400 feet, and the pixel values correlated very well with the ground reference data I collected, so I am not sure that doing a subtraction instead of a division is always necessary.

      Vignetting: I see this more with front-mount filters and/or fish-eye lenses, and in those cases I would assume it affects all bands proportionally because it is a function of radially varying attenuation of incoming light (how many photons are hitting each pixel). I am not sure about atmospheric attenuation, but in my experience NDVI copes very well with scattered cloud cover.

    • I'm just catching up on this discussion. I posted something in another thread that sounds relevant here:

      OK, so why are there anomalous NDVI or DVI values in some shadows? It's because healthy plants with a good leaf structure reflect NIR light so efficiently - more so than any other wavelength. It's the leaf structure that reflects the NIR; this helps the plant stay cool. So where does that scattered NIR light go? It goes everywhere! Including the shadows. So if we follow the path of some NIR photons, they scatter from a nice healthy plant into the shadows, where some may be absorbed but others reflect back up to the sensor. The effect is that this scattered light adds to the apparent NIR reflectance of the shadows. And since there isn't much signal coming from shadows anyway, it doesn't take much to produce an increased NIR signal there, and that would account for your anomalously high NIR-vis values in dark areas.

      I learned all this in class but was really struck by this when I was working with some low altitude hyperspectral data where there were some bushes and grass next to a road. I converted the data to reflectance and then extracted a reflectance spectrum from the center of the road. I was amazed by how much that road reflectance spectrum looked like vegetation reflectance! There was a strong NIR signal. I was 4 or 5 pixels away from any veg but the scattered NIR light from the veg contributed strongly to the signal of the dark road. Same would happen with shadows as there is not much signal coming from them either.

      Cheers,

      Joe
