Hi! I got the arducopter kit with the ultimate aim of using it for aerial imaging.
Been flying the arducopter for a while indoors. I've had two crashes, one resulting in a propeller cut in half and the other twisting the aluminum arms. I had to replace the aluminum frame with similar material I got from a local aluminum window and door frame maker's shop. As for the propellers, I replaced them all with the sturdier 10x4.7 APC props, and the rig flies OK.
I am only getting 4 minutes tops on a 2200mAh 20C battery.
Today, I took the next logical step of finding out whether the cameras I've identified for the aerial imaging can be lifted by the arducopter. Two d-340R cameras with batteries plus the arducopter resulted in a total flying weight of 2kg (might be a little heavier by a couple of grams).
The result: I have to throttle up to nearly 75% for the quad to lift off the ground and hover for a couple of seconds very close to it. At that point, though, the battery monitor starts turning red, so I have to stop the test. The ultimate envisioned setup is to remove the batteries from the cameras and source their 6.5V requirement from the LiPo. That should shave some weight off the current 2kg.
QUESTION: Do you think the 20C does not have enough juice to make the arducopter hover? Correct me if I'm wrong, but with a higher C rating on the battery, the throttle stick can be lowered to accomplish the same motor performance, right? I mean, with a 20C battery at 75% throttle, the quad lifts off the ground. So if I had a 25C or 30C battery, I would definitely not need to put the throttle at 75% to accomplish liftoff, right?
Actually, as per the motor and ESC specs, 20C is the lowest possible C rating and 30C the maximum.
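For what it's worth, a pack's C rating only sets a ceiling on how much current it can safely deliver; it doesn't change the pack voltage or how hard the motors must work to hover, so a higher-C pack by itself won't lower the hover throttle. A quick back-of-the-envelope sketch (the per-motor hover current is an assumed figure, not a measurement):

```python
# Rough sanity check: a pack's C rating only sets the maximum safe
# continuous discharge current; it does not change voltage or the
# throttle needed to hover. The per-motor draw below is an assumption.

capacity_mah = 2200  # pack capacity (from the post above)
c_rating = 20        # continuous discharge rating

max_current_a = capacity_mah * c_rating / 1000  # max safe continuous draw
print(max_current_a)  # 44.0 A

# If each of the four motors drew, say, 10 A near hover (assumed figure):
hover_draw_a = 4 * 10
print(hover_draw_a < max_current_a)  # True -> the C rating is not the limit
```

So a 25C or 30C pack raises that current ceiling, but lowering the hover throttle takes more thrust (bigger props or motors) or less weight.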
Thanks for the confirmation on the battery C relation to higher voltage requirement.
Correct me if I'm not getting this right, but the infrared image will be obtained with the camera's IR filter removed and replaced with a filter that blocks the red (not totally, though), green, and blue parts of the spectrum. Processing the acquired infrared images will necessitate converting individual pixel infrared values to reflectance values by expressing each digital number (0-255) relative to the amount of near-infrared energy reflected off a standard surface with "known" reflectance. The same thing will be done with the red channel of the RGB image. The resulting NDVI values will definitely not have the precision of an ASD FieldSpec spectroradiometer, but the result, I reckon, will have practical relevance for real-time vegetation diagnosis and monitoring. Regardless of how the Bayer filter represents a composite image, working on the individual RGB components may prove to be useful.
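To make the workflow above concrete, here is a minimal sketch of the reflectance conversion and NDVI computation in plain Python. All digital numbers (DN) and the reference card's reflectance are invented illustrative values, not calibration data:

```python
# Minimal sketch of the NDVI workflow described above.
# All digital numbers (DN) and reflectances are invented examples.

def to_reflectance(dn, dn_reference, reference_reflectance):
    """Convert a digital number (0-255) to reflectance by ratioing it
    against a standard surface of known reflectance in the same scene."""
    return dn / dn_reference * reference_reflectance

def ndvi(nir_reflectance, red_reflectance):
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red)."""
    return (nir_reflectance - red_reflectance) / (nir_reflectance + red_reflectance)

# Hypothetical vegetation pixel: DN 200 in the NIR image, DN 40 in the
# red channel; the reference card (known reflectance 0.8, made up) read
# DN 220 in the NIR shot and DN 210 in the red one.
nir = to_reflectance(200, 220, 0.8)
red = to_reflectance(40, 210, 0.8)
print(round(ndvi(nir, red), 2))  # a high, vegetation-like value
```

For real imagery you would apply `to_reflectance` to every pixel of the whole NIR and red layers before computing NDVI.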
But then again, you might be right :(
Sorry, I didn't mean to be 'right' or dismissive, so sorry if I sounded a bit negative.
Your understanding sounds right to me, but I think there are a couple of further considerations:
An automatic camera will choose a shutter speed and aperture depending on the light conditions for each shot. This probably renders your before/after-flight reflectance calibration useless, unless you can make some very clever adjustments from the EXIF data or have a fully manual mode.
You have three channels in the IR-modified camera, all with different sensitivity in IR due to the Bayer filter. Will you use just one and lose up to 50% of your resolution? Or will you try to combine them? Also, the RGB camera's red channel has only 50% of the resolution.
It's still a cool thing to play with.
It is indeed cool to play with, and don't worry about sounding negative ... I appreciate getting all the insights up front rather than later.
That being said, I thought that for every pixel on a CCD there is a corresponding Bayer filter element, as opposed to a single Bayer filter being overlain on the entire CCD sensor. So if one has a 10-megapixel camera, all 10 million pixels will have corresponding Bayer filter elements. If that is right, then you don't lose any resolution, because the reflectance measure does not involve the other spectral bands. So even if green and blue are there, they are useless, since NDVI in this case does not involve them. Only the red on the normal camera and the infrared on the modified one are involved.
The entire CCD is inherently differentially efficient anyway. If we really look at the share of pixels detecting each band, it is 50, 25, and 25 percent for green, red, and blue. The normal image you are seeing is the result of these variable efficiencies in detecting the three primary colors. The only challenge I see is the cut-off bands for red and infrared overlapping, where the upper red wavelengths go beyond the lower wavelengths of the modified camera's infrared, losing/diluting in the process the precision with which the physical phenomenon of plant "health" is measured. As for shutter speed and aperture, at least somebody found a way to control those two parameters via an interface with a Palm Pilot for the camera I'm using. I still have to overcome the challenge of remotely triggering the camera from the handheld Palm Pilot, though.
Images taken with the modified near-infrared camera will contain 0 pixel values for the green and blue layers. This is because the filter that replaced the hot filter only allows infrared and blocks the visible. So splitting such an RGB image into its corresponding red (actually infrared, as the filter is already modified), green, and blue layers will result in black images with 0 values for all pixels, except that the red/infrared layer will have digital numbers from 0 to 255. At least that's the theoretical scenario as I understand it.
Although reflectance values will be computed after the flight, the values needed for the computation are all captured at the moment the images are taken, with a standard reflectance card or canvas placed in the imaged area. The after-flight calibration, therefore, will not be useless, I think.
Wish I knew your background.
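One way to see why the in-scene card helps, even given the auto-exposure concern raised earlier: if the camera scales every digital number in a shot by some unknown exposure gain, ratioing each pixel against the card recorded in the same shot cancels that gain. A toy sketch with invented numbers:

```python
# Toy illustration: an in-scene reference card cancels the unknown
# per-shot exposure gain of an automatic camera. All numbers invented.

KNOWN_CARD_REFLECTANCE = 0.5  # assumed calibrated value of the card

def reflectance(pixel_dn, card_dn):
    # Both digital numbers share the same unknown exposure gain,
    # so the gain cancels in the ratio.
    return pixel_dn / card_dn * KNOWN_CARD_REFLECTANCE

true_reflectance = 0.3  # what we want to recover
for exposure_gain in (0.8, 1.0, 1.4):  # a different auto-exposure each shot
    card_dn = 200 * exposure_gain
    pixel_dn = 200 * exposure_gain * (true_reflectance / KNOWN_CARD_REFLECTANCE)
    print(round(reflectance(pixel_dn, card_dn), 3))  # 0.3 every time
```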
"Images taken with modified near infrared camera will contain 0 pixel values for the green and blue images."
I think that you will find (as I did when I first tried it) that each of the RGB channels, not just the red, collects a lot of NIR light. It seems that the Bayer filter for, say, green blocks visible blue and red but lets through NIR, and likewise for the blue. Of course, you don't have to use these channels.
I'd be interested to get a definitive answer on the loss-of-resolution thing. I was pretty sure that a 10MP camera would have 2.5 million red pixels....
My background: partner in a UAV operating company, former UAS evaluator, and commercial pilot. I'm definitely not a boffin!
English is not my first language, so I had to look up "boffin". Looks like it started as a badge of honor but degenerated into something not so good. By all means, I did not mean to question your background; I was just curious where you were coming from. Turns out you have all the credentials :)
It turns out four pixels (2x2) compose a single RGB value in the image. These four pixels have Bayer filters on them: two green and one each for red and blue. So I guess for a 10MP camera, the number of Bayer 2x2 blocks will be 10 million divided by 4, at the least. So when separating the RGB channels without additional processing, at most 75% of the red layer will be without information. The same will be true for the blue, but the green layer will have 50% coverage. With a demosaicing algorithm, interpolation between neighboring pixels happens, and there you go: an image with pixel values between 0 and 255 everywhere.
So I was wrong in assuming there's a 1:1 correspondence between pixels and Bayer filters ... it's actually 4 pixels to 1, at the least. Using a filter, though, will not decrease resolution, because you will be using everything the CCD can supply.
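That mosaic layout can be sketched directly. Below is a toy 4x4 RGGB mosaic (pattern and values invented for illustration) showing that, before demosaicing, red and blue each cover only a quarter of the sensor sites and green half:

```python
# Toy RGGB Bayer mosaic: each 2x2 cell is [R G / G B].
# Extracting the raw channels shows red and blue cover only 1/4 of the
# sensor sites and green 1/2, before demosaicing interpolates the rest.

mosaic = [
    [10, 20, 11, 21],
    [30, 40, 31, 41],
    [12, 22, 13, 23],
    [32, 42, 33, 43],
]

red   = [[mosaic[y][x] for x in range(0, 4, 2)] for y in range(0, 4, 2)]
blue  = [[mosaic[y][x] for x in range(1, 4, 2)] for y in range(1, 4, 2)]
green = [mosaic[y][x]
         for y in range(4) for x in range(4)
         if (x + y) % 2 == 1]  # the two G sites in every 2x2 cell

print(red)         # [[10, 11], [12, 13]] -> 4 of 16 sites (25%)
print(blue)        # [[40, 41], [42, 43]] -> 4 of 16 sites (25%)
print(len(green))  # 8 -> half the sites (50%)
```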
Now, on the aspect of a pixel with a blue filter showing IR values: that does not make sense to me. If that were the case, then green pixels would also show IR, since infrared wavelengths come after red, which comes after green in the visible spectrum. It must be electronic noise or heat from the electronics showing up, as the incident IR should be filtered out by the blue filter. The good thing about having a standard included in the imaging is that one could potentially apply a constant to the pixel values to account for it.
You have it roughly correct, but there are different infrared filter methodologies.
All of them involve removing the NIR-blocking filter.
Simply trying to maximize real radiometric and spatial resolution would lead you to add a long-pass filter at 720-850nm, creating a camera that only detects NIR. The different subpixels would be added together afterwards to create a monochrome image. Unfortunately, this creates problems for automatic exposure settings and autofocus.
Next, you could use a blue-blocking filter (often a simple red/orange filter) with transmittance high into the NIR. While green and red bayer filters have only slightly dissimilar spectral responses, the blue bayer filter's peak is well away from them. It already blocks out red very well by design necessity, but it often has considerable transmittance in the near infrared. A camera like this allows you to capture both red and near infrared in one image, making NDVI metrics possible. This is how the ADC Tetracam that I have limited experience with works, comparing the red and blue channels through a filter that passes both red and NIR.
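For a camera set up that way, the index computation itself is simple. A hedged sketch, assuming the blue channel can stand in for NIR and the red channel for red, with any cross-band leakage handled by separate calibration (pixel values are invented):

```python
# Hedged sketch of an NDVI-style index from a single blue-blocked image,
# assuming (as in the setup described above) that the blue channel
# stands in for NIR and the red channel for red, and that cross-band
# leakage has been calibrated out. Pixel values are invented.

def pseudo_ndvi(nir_dn, red_dn):
    total = nir_dn + red_dn
    if total == 0:
        return 0.0  # avoid dividing by zero on dark pixels
    return (nir_dn - red_dn) / total

print(round(pseudo_ndvi(180, 60), 2))  # vegetation-like pixel: high index
print(round(pseudo_ndvi(90, 110), 2))  # soil-like pixel: near zero or negative
```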
I believe that various tricks of white balance and channel re-mapping are normally used to create *photographic* false color infrared, usually in order to make the sky a deep blue while vegetation shows up brightly against it. In multispectral satellite imagery, where these problems with existing Bayer matrices don't exist, we just re-map the green, red, and NIR bands for false color.
Ad and Squalish,
Thanks for keeping at it with the discussion :)
I am attaching files (refer to bottom right for acknowledgment) as visual aid.
The impact of the Bayer filter on an image taken by a camera with the infrared filter removed is apparently NOT negated. Splitting the image Ad shared into RGB resulted in very similar layers, and subtracting one layer from another illustrated this similarity, as the pixel value differences were small. The pixel histograms also confirm the similarity (figures 1 and 2).
The explanation was pointed out by Squalish: the transmission curves of the Bayer filter have second peaks for green and blue (figure 3). With the infrared filter in place, one can distinctly identify green, blue, and red pixels, as the second peaks of the green and blue do not show up. Without the infrared filter, the second peaks show up, resulting in what Ad was insisting, and rightly so: the green and blue layers do show infrared values.
I have been trying to locate an ADC camera but apparently it went out of production!
So, moving forward: will removing the infrared filter of a digital camera and putting in a pass-through filter between 720 and ~825nm (not 850) have practical utility for deriving a vegetation index like NDVI? I've indicated not 850 because beyond ~825nm the four bands of infrared, red, green, and blue start to merge again.
figure 1. Histogram of the blue, green and red pixel values of image provided by Adrian.
figure 2. Histogram of the red, green, and blue image subtraction done in Idrisi for windows GIS package.
figure 3. Transmission curves of a typical CCD.
Yes, the battery alarm in the form of a buzzer was sold out.
This afternoon I am buying one too :)
How about larger props? I use 12x3.8 APC props with Turnigy 2217 850kv motors and my Quad flies okay. It is 2.0 kg without any camera. I use a 4000mah 20C battery - the max discharge rate therefore is 80A. Each of the motors rarely even get close to 20A (they peak at 17A), so power is simply not a problem, even at full throttle. I can hover at about 50% throttle, however.
Because of the extra weight, I don't get flight times as long, and she's not as nimble as others. However, she is built like a tank and shrugs off minor crashes.
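The current figures in the post above can be double-checked using only the numbers stated there:

```python
# Double-check of the current headroom from the figures stated above.
capacity_mah = 4000
c_rating = 20
max_discharge_a = capacity_mah * c_rating / 1000
print(max_discharge_a)  # 80.0 A, matching the stated max discharge rate

peak_per_motor_a = 17
total_peak_a = 4 * peak_per_motor_a
print(total_peak_a)                    # 68 A worst case across four motors
print(total_peak_a < max_discharge_a)  # True: the pack has headroom
```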
There seems to be a consensus to get a bigger prop when one upgrades to a more powerful motor. It would be useful to everyone if there were some kind of table or matrix indicating which motor and prop combinations members of the arducopter community have used or are using, in addition to the standard motor+prop combination of the kit.