Hi,
I’m taking raw images for my research.
I was wondering if anyone here knows good software to convert raw NIR images to TIFF.
I'm doing temporal analysis, and I only need the NDVI values.
What kind of corrections (geometric/radiometric) do I need to apply to my images as pre-processing, before I make an orthomosaic?
How can I convert the digital numbers from my camera to reflectance values?
I'd be very glad if you could answer my questions.
Replies
Julie, It sounds like you have a good design with regard to image acquisition. It's great that you used a white balance target with different gray levels. As far as applying corrections goes, I'm not sure what would be best. Since you would be applying the same correction to all of the images from a single mission, it seems like applying the correction to the orthomosaic would be fine. The only possible problem I can think of off the top of my head is that with pixel interpolation and the artifacts that will be in the final mosaic you might get some spotty, odd results. It shouldn't be too difficult to try both methods (easy for me to say), and reporting on the comparison would have value.
Ned
Thanks Ned,
I will keep you updated about my research if you're interested.
And thank you sooo much for suggesting dcraw; I'm liking it.
Thanks
Hi Julie,
I'll try to provide some insight into your questions. The best software that I'm aware of to convert RAW images is DCRAW. You have to be a little careful with software that converts raw images to other formats, since nearly all of it applies some sort of correction. With DCRAW it is possible to get a nearly exact copy of what was recorded by the sensor, and as a result, due to the Bayer pattern on most camera sensors, the resulting image will look a bit odd. To convert the Bayer image to an RGB image I use a debayering plugin in ImageJ/Fiji. That plugin offers some pixel-averaging options to convert the Bayer-pattern image to a 3-band image that you can use to calculate NDVI. The process I use is described in a Public Lab research note: http://publiclab.org/notes/nedhorning/06-23-2014/calibrating-raw-im...
I'm planning to update that note when I return from the field at the end of the month to describe the updates to the calibration process I've been working on.
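If it helps, the NDVI step itself is straightforward once you have the 3-band image. Here is a minimal Python sketch, assuming the debayered TIFF ends up with NIR in the red channel and the visible band in the blue channel; that channel assignment depends on your particular filter and conversion, so swap the indices to match your camera:

import numpy as np
import tifffile  # pip install tifffile

# Load the 3-band image produced by dcraw + the ImageJ/Fiji debayering step
img = tifffile.imread("frame_debayered.tif").astype(np.float64)

nir = img[:, :, 0]  # assumption: NIR recorded in the "red" channel
vis = img[:, :, 2]  # assumption: visible band recorded in the "blue" channel

# NDVI = (NIR - VIS) / (NIR + VIS), guarding against divide-by-zero
denom = nir + vis
ndvi = np.where(denom > 0, (nir - vis) / denom, 0.0)

tifffile.imwrite("frame_ndvi.tif", ndvi.astype(np.float32))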
If you go through the trouble of recording raw images I would hesitate to apply a white balance. The raw image has a linear response to reflected light (assuming no atmospheric issues) and once you apply a white balance that linearity no longer exists.
The goal of your work is interesting and I'm curious whether the DIY cameras will provide sufficient information. Please report back if possible. You might want to look into alternative/complementary methods such as PhotosynQ: http://publiclab.org/notes/cfastie/05-22-2015/multispeq-at-ifarm
Ned
Thanks Ned,
The Public Lab note seems really helpful.
Does DCRAW have batch processing? I guess I can look into it when I install it on my PC.
And do you think I should be looking into converting to reflectance?
On one hand I'm only doing comparisons, but on the other hand I'm taking temporal data, so maybe I need to normalize my data since I'm comparing values taken on different days.
I'm thinking of taking readings from the white board panels that I have in my images and then computing the reflectance values manually. It's just a thought for now.
Thanks.
Julie, DCRAW is a command line utility so batch processing shouldn't be too difficult. I agree that normalization is more important than calibration to reflectance if you are looking for relative changes between missions. Using your white target is good, but it would be best if you can use a darker target as well so you have at least two points to do the normalization. Look for invariant targets that you would expect to have the same reflectance on each of your missions. If that's not possible or practical you might be just fine using only the white target. For each of your missions try to keep as many parameters as possible the same (e.g., shutter speed, focal length, sun angle, atmospheric conditions, flying height, photo orientation, camera location for each shot...). You won't be able to control everything, but in general the more similar your flights are, the better off you will be. Something like the sketch below would be a starting point for both steps.
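This is a rough Python sketch only. It assumes dcraw is on your path; the folder name, file extension, target DN readings, and target reflectances are all placeholders you would replace with your own measurements:

import glob
import subprocess

import numpy as np
import tifffile  # pip install tifffile

# 1) Batch conversion: loop dcraw over every raw file in a mission folder.
#    -D = document mode (no debayering, no scaling), -4 = linear 16-bit, -T = write TIFF
for raw in sorted(glob.glob("mission1/*.NEF")):  # adjust the extension to your camera
    subprocess.run(["dcraw", "-D", "-4", "-T", raw], check=True)

# 2) Two-point normalization (empirical line): map DN to reflectance using
#    mean DNs sampled over the white and dark targets. All four numbers
#    below are placeholders; measure them in your own images.
white_dn, dark_dn = 52000.0, 3000.0  # mean DN over the white / dark targets
white_ref, dark_ref = 0.85, 0.05     # known (or assumed) target reflectances

gain = (white_ref - dark_ref) / (white_dn - dark_dn)  # reflectance = gain * DN + offset
offset = white_ref - gain * white_dn

for tif in sorted(glob.glob("mission1/*.tiff")):  # dcraw -T writes .tiff files
    band = tifffile.imread(tif).astype(np.float64)
    reflectance = np.clip(gain * band + offset, 0.0, 1.0)
    tifffile.imwrite(tif.replace(".tiff", "_refl.tif"), reflectance.astype(np.float32))

Good luck. Ned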
This video probably addresses a lot of the questions you asked. You don't need to apply geometric corrections, because the photogrammetry calculation model includes a matrix of camera parameters that describes distortion, focal length, etc. In fact, it's better to leave the distortions in so the calculation model has some room to fit the data.
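For what it's worth, those camera parameters usually amount to a pinhole model plus radial distortion terms (the Brown model). A toy Python sketch with made-up numbers, just to show the kind of model the software fits for itself:

# Placeholder interior-orientation parameters; photogrammetry software
# estimates these itself during self-calibration / bundle adjustment.
f, cx, cy = 1500.0, 2000.0, 1500.0  # focal length and principal point (pixels)
k1, k2 = -0.12, 0.03                # radial distortion coefficients (made up)

def project(x, y):
    """Project ideal normalized image coords to distorted pixel coords."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    return f * x * radial + cx, f * y * radial + cy

print(project(0.1, 0.2))  # where an ideal ray actually lands on the sensor

Pre-undistorting your images would just remove the information the solver uses to estimate those terms.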
I have little experience with radiometric data, but that's obviously what the video talks about. The difficulty here is that to do this correctly you need to characterize your camera (similar to how you calibrate it for geometric deficiencies). If the camera was converted by MaxMax, then they should be able to help you further, or perhaps they have a forum.
You should check out Agribotix's research too: http://agribotix.com/blog/2014/6/10/misconceptions-about-uav-collec...
Maybe this helps:
http://diydrones.com/profiles/blogs/new-ngb-filters-and-higher-reso...
Thanks,
Well, it doesn't help me with the dataset I've already collected, but the filter seems interesting. How does it do at separating vegetation from soil or water?
The sample pictures on the website are all vegetation and buildings.
Hi!
Basically, the filters determine what you can and can't see, depending on the wavelengths they pass.
There are a lot of filter/wavelength combinations, so you need to define your goals before searching for a specific one.
Maybe the Event38 (www.event38.com) staff could help you with that.
Good luck!