The right tool for the job

There have been recent posts on the “wall” about scientific and “toy” cameras for mapping. The focus is on NDVI, which is simply an index describing the difference between reflected red and near-infrared radiation from a target. It's an index because it is unitless, and it is normalized so values always fall between -1 and +1. It tends to be a good indicator of plant vigor and has been correlated with different aspects of plant productivity.
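The index itself is just a normalized band ratio, easy to sketch in code (the reflectance values in the example are made up for illustration):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red).

    nir and red are reflectance (or radiance) values or arrays for the
    near-infrared and red bands; the result is unitless and always
    falls between -1 and +1.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Healthy vegetation reflects far more NIR than red:
print(ndvi(0.50, 0.08))  # ≈ 0.724
```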

In any digital camera that I'm familiar with, a pixel starts its life as a voltage. The next step is where the scientific and point-and-shoot cameras diverge. In a scientific camera, the voltage is simply calibrated to output radiance; in a point-and-shoot camera, it follows a more complex processing path to output something pleasing to the human eye. Scientific cameras try to measure physical variables as accurately as possible, and point-and-shoot cameras try to make a good-looking photograph – science vs. art. Point-and-shoot cameras are more complex than scientific imagers, but they use lower-quality, mass-produced parts to keep costs down, whereas scientific cameras use precision everything, produced in low volumes. That's a brief summary, but the bottom line is that the two cameras are designed for different uses. To imagine that a camera designed for making pretty pictures can be used for scientific studies seems a bit ludicrous – or does it? It depends on what you want to do.

There is a good bit of work going on to try to convert the point-and-shoot camera from an art tool into a scientific tool. This is an area that fascinates me. I realize there are serious limitations when working with low-quality sensors and imaging systems, but some (perhaps many) of those radiometric and geometric imperfections can be modeled and adjusted using calibration techniques and software. For example, there are a few articles in the peer-reviewed literature about people calibrating commercial digital cameras (usually DSLRs) to record radiance, and the results are pretty encouraging. I have been developing my own workflow to calibrate point-and-shoot cameras, although I'm using simple DIY approaches since I no longer have access to the precision lab equipment that would allow me to more accurately characterize my cameras. If anyone is interested, I post my calibration experiments on the Public Lab website. I'm always looking for feedback to advance this work, so comments are welcome. My intent is to make simple cameras into the best scientific tools possible.
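The core of any such workflow is mapping raw pixel values to radiance. A minimal sketch, assuming a simple linear sensor response; the gain and offset here are made-up placeholders, and a real calibration would fit them per band from targets of known reflectance:

```python
import numpy as np

# Hypothetical per-band calibration coefficients (placeholder values,
# not from a real camera); in practice these would be fitted by
# imaging calibration targets of known reflectance.
GAIN, OFFSET = 0.0021, -0.034

def dn_to_radiance(dn, gain=GAIN, offset=OFFSET):
    """Convert raw digital numbers (DN) to radiance with a linear model.

    Assumes the camera's tone curve has already been undone, i.e. you
    are working from linear sensor data - the hard part with a
    point-and-shoot camera.
    """
    return gain * np.asarray(dn, dtype=float) + offset
```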

When deciding which instrument to use, you need to consider the goals of the project and the available financial resources. For the financial side, you need to consider purchase cost, maintenance, and replacement cost if the camera is damaged. There is no comparison from a cost perspective. On the bargain side of scientific imagers you should expect to pay a few thousand dollars, and if you want a large-format mapping camera it's in the ballpark of $1 million. The precision/scientific-grade cameras are very expensive, require careful maintenance and recalibration (which can also be costly), and if you have one in a UAV that crashes you will likely lose a lot. You can get a used digital camera and convert it to an NDVI-capable imager for well under $100, or purchase one designed for mapping, like the Mapir, for about $300.

What about accuracy, precision and stability? Clearly, instruments designed with these qualities in mind will be better than something made to take pretty pictures. A more appropriate question is: what is good enough for our purposes? I'll focus on NDVI mapping; it's important to realize that different applications (e.g., creating 3D point clouds, ortho-mapping, land cover classification) will have other qualities to consider.

One important factor is radiometric accuracy. Although I'm trying to improve what we can get from point-and-shoot cameras, I realize I will never attain the accuracy or precision possible with scientific imagers. How important are radiometric qualities for NDVI mapping? In most of the applications I see on this and similar forums, people are mostly interested in relative changes in NDVI throughout an image, not absolute NDVI values. Some folks want to monitor NDVI over time, and in that case it's important to be able to standardize or normalize NDVI, but that is possible with calibration workflows. For these applications, a well-designed and calibrated point-and-shoot camera can perform well enough to provide the information required, such as spotting problem areas in an agricultural field.

One point that is often overlooked is that close-range imaging and NDVI typically do not go well together. The problem is that we are imaging scenes with leaves, stems and soil, and at the fine resolution provided by most point-and-shoot cameras we are trying to get NDVI values from very small areas on the ground. For example, we can see different parts of a leaf, and each part is angled somewhat differently, which will affect the NDVI value. Our scenes tend to be very complex, and you can have the most accurate and precise instrument available and still be disappointed because of the physical issues (bidirectional reflectance, mixed pixels, small-area shadows...) that create noise in the images.
It is certainly nice to reduce as many sources of noise as possible but with a scientific camera I'm not convinced (at least not yet) that the improved radiometric performance is significant enough to overcome all of the noise coming from the scene to justify their use.
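One practical response to scene noise is to discard the noisiest pixels rather than average over them. A minimal sketch that flags heavily shadowed pixels by low total brightness (the function and the 10% threshold are my own illustration, not a tested recipe):

```python
import numpy as np

def mask_dark_pixels(ndvi, nir, red, dark_frac=0.10):
    """Set NDVI to NaN where the scene is too dark to trust.

    Heavily shadowed pixels are flagged by low combined brightness;
    dark_frac is an arbitrary cutoff relative to the brightest pixel.
    """
    brightness = np.asarray(nir) + np.asarray(red)
    masked = np.asarray(ndvi, dtype=float).copy()
    masked[brightness < dark_frac * brightness.max()] = np.nan
    return masked
```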

As for the Mapir camera, I received one of these last week and am trying to set aside time to calibrate it and see how well it performs. My initial reaction is that it is a nice compact camera well suited to small-UAV mapping. I would prefer a red dual-pass filter, but I expect that and other enhancements will become available in future versions. I like the fact that someone is focused on developing practical low-cost mapping cameras.

I welcome any comparisons between cameras and hope we can work together to improve the output we get from simple inexpensive cameras.  



    • Moderator

      I have a small (about 10 acre) section of orchards that I have been doing regular flights over for a few weeks, in varying conditions. Due to the fires on this side of the country, the last one had dense smoke. 

      I'll see if I can share some of those results and you can give me some feedback on the time of day, etc. I'll also start logging the cloud cover, though I typically only fly on blue-sky days. 

      The idea was to compare fertilizer applications, but some issues pointed out by both of you in emails, as well as here, need to be addressed. 

    • Yes, I am interested in cloud vs canopy self-shade. Within a given area, let's say 20 square feet, there will be less pixel-to-pixel variance in an area shaded by cloud (it will also be more diffuse) than an area imaged when the sun angle is low and there is a good deal of canopy self-shading.  

      I agree that it is better to mask out pixels with high uncertainty. I am also not convinced about the difference index, especially with respect to versatility. However, there may be some literature on the topic and, if not, an opportunity to think of a way to study the matter.

    • Moderator
      Ned, I should have the replacement s100 on Tuesday, and you're familiar with the filter on that camera, as well as the MAPIR. I'd be happy to do what I can with collecting data and passing it on to you and John, if you have some specific requests. I'm lucky enough to have a wide variety of crops that can literally be flown from my back yard.
      I'm also going to try to write a RedEdge into a grant request, but that is 6 weeks out.
      I'd like to contribute and don't mind being the mule, so let me know if I can help with the data collection.
    • For the shadow question does it matter what the source is? I would expect NDVI to decrease from a theoretical point of view since red light would tend to scatter more than NIR in the shadow area but I'm sure there are other factors at play especially related to filter and sensor efficiency.

      The nice thing about ratios is that many of our problems get divided away, and that doesn't happen with a difference. Of course, that assumes "our problems" affect the red and NIR channels proportionately, and that's usually not the case, but it's often close. I wouldn't go so far as to say NDVI is insensitive to shadows, but it's less sensitive than a difference index. It would be interesting to test this using aerial photos.

      I haven't seen masks used much with aerial photo work, but at some point it might be better to mask out pixels with high uncertainty (like heavy shadows) than to give inaccurate information.
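      The "divided away" point is easy to demonstrate with a toy example (the reflectances and shadow factor below are made up). Scaling both bands by the same shadow factor leaves NDVI unchanged, while a simple difference index scales with it:

```python
def ndvi(nir, red):
    return (nir - red) / (nir + red)

def diff_index(nir, red):
    return nir - red

nir, red = 0.50, 0.08
shade = 0.4  # idealized shadow dims both bands by the same factor

print(ndvi(nir, red), ndvi(shade * nir, shade * red))              # NDVI is unchanged
print(diff_index(nir, red), diff_index(shade * nir, shade * red))  # difference shrinks
```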

    • Moderator
      I'm still here and following closely, as I'm sure others are. This discussion is far beyond my current understanding, so I'm sitting in the back of the class and taking notes.
      I'm happy with the discussion, this is exactly the direction I was hoping it would go.
  • Information from Peau Productions:

    Hello Michael,


    The white balance of the GoPro cameras makes them less effective at mapping when you're letting infrared light into the sensor. We started doing NDVI with modified GoPros, but we found that there are better (and less expensive) options available, and we took those and created our new MAPIR camera line. These cameras produce far better results when capturing infrared light, and I suggest looking at the examples we have up so far.


  • Moderator
    Hey McCollam,

    The MAPIR does have preset WB and ISO modes, as well as auto.
  • The Mapir camera is a Peau Productions offering. Peau also offers a line of GoPro cameras modified for various uses, including NDVI. It's not clear from information on Mapir's site how close the Mapir is to a modified GoPro, but just looking at the cameras indicates they're of similar pedigree, at least to some extent.

    While it varies over models, in general, white balance adjustment on the GoPro is limited to a number of preset options. Much of the (great) work done by Ned to make consumer cameras more consistently useful depends on the ability to make specific WB adjustment prior to capture, in order to calibrate the camera for differences in lighting during each capture event and across events occurring at different times.

    It would be good to know how WB adjustment works on the Mapir. Is WB adjustment on the Mapir similar to what's available on the GoPro? Would love to hear more.

    Great work Ned, as usual! 


    • I think the Mapir camera is a modified SJCAM, and it has the same form factor as a GoPro. The white balance settings are: Auto, Daylight, Cloudy and Tungsten. There is no custom white balance as far as I can tell.

      The latest calibration options in the photo monitoring plugin shouldn't be affected by color balance, although only testing will tell if that's an accurate statement. I don't recommend auto white balance if using the calibration workflow, since the white balance is set independently for each photo based on scene composition. The other white balance settings should remain constant since they are based on the blackbody properties of the illumination source - at least that is my understanding.

  • Great write-up, thanks Ned. Keep it coming!
