Downward facing camera

marcy2_37.jpg

marcy2_38.jpg

marcy2_39.jpg

marcy2_40.jpg

marcy2_41.jpg


The 1st attempts to point the camera straight down were failures.  It could be pointed straight down & get the same target in every frame, but those arrangements screwed up either the starting pin or the center of gravity.

It became obvious that the propeller had to thrust in the plane of the center of gravity for the most stability, so all the electronics had to be in that plane.


Having the wing below the thrust plane hindered the whole Marcy class.  Part of the coning angle came from the center of gravity in the wing lifting into the plane of motor thrust.  At least this showed a winglet should add more stability than a dead weight, since it puts more mass in the motor plane.

images09


So the axially pointed camera revealed new rolling shutter distortion.  Instead of uniform vertical squishing, the top right was stretched & the bottom left was squished, in both directions.  That is correctable by knowing the rotation rate & the scan rate.
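A minimal sketch of that correction, assuming a constant spin rate & a known per-scanline readout time; the function & parameter names are illustrative, not from the flight code.

```python
import numpy as np
import cv2

def derotate_rolling_shutter(frame, omega, line_time, center):
    # omega: spin rate in rad/s (assumed constant over one frame)
    # line_time: readout time per scanline in seconds
    # center: (cx, cy) pixel where the spin axis crosses the image
    h, w = frame.shape[:2]
    cx, cy = center
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    # Row y was exposed omega * y * line_time radians later than row 0,
    # so sample the source at the position rotated by that extra angle.
    theta = omega * ys * line_time
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    dx, dy = xs - cx, ys - cy
    map_x = cx + cos_t * dx - sin_t * dy
    map_y = cy + sin_t * dx + cos_t * dy
    return cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)
```

The sign of the rotation depends on the spin direction, but the idea is the same: each scanline gets its own small derotation instead of one correction for the whole frame.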

marcy2_42.jpg

marcy2_43.jpg


This was the best arrangement.  Moving the camera from the wing to the balance beam got its Y axis always parallel to the spin axis.  With the camera on the balance beam, on the opposite side of the spin axis from the wing, its field of view in the X axis covered the spin axis instead of looking away from it, & its Y axis always pointed straight down.

cam85.png



So the camera could look straight down the spin axis just by repositioning it on the plane of rotation instead of building a diagonal harness. 

images10.jpg


Also noticed there was a good chance the attitude could be determined by looking at the takeoff stand's position in the frame.



Anyways, object recognition with SURF was worthless on the 160x240 video.  There seems to be a minimum complexity before SURF detects a feature point.  The resolution isn't high enough to hit that minimum complexity.
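For reference, a sketch of the kind of test that showed it, using the contrib build of OpenCV; the file name & threshold value are illustrative.  SURF only keeps points whose Hessian response clears a threshold, which is the minimum complexity at work.

```python
import cv2

# SURF lives in opencv-contrib (cv2.xfeatures2d), not stock OpenCV.
gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
keypoints, descriptors = surf.detectAndCompute(gray, None)
print(len(keypoints), "feature points")  # near zero on 160x240 frames
```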

images11.jpg

images12.jpg


At flight altitude, the marker is gone.  The AR Drone required much higher resolution just for detecting velocity.

With the camera pointing at the floor, there might be a future in chroma keying.  The floor is more uniform than the ceiling.


Marcy 2 would require covering a large area of the floor in markers.  The field of view is too narrow for a single marker.  A simple grid with different codes in each square would do it.  A flashing laser projected from the aircraft, off axis from the spin axis, might give altitude.  Sonar altitude is too expensive.
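One way the off-axis laser could give altitude is plain triangulation: assuming the beam is parallel to the spin axis at a known baseline, the spot's pixel offset from the spin axis shrinks as altitude grows.  A sketch, with every number illustrative:

```python
def laser_altitude(offset_px, focal_px, baseline_m):
    # Beam parallel to the spin axis, baseline_m from the camera:
    # the spot projects offset_px = focal_px * baseline_m / altitude
    # pixels from the spin axis, so altitude falls right out.
    return focal_px * baseline_m / offset_px

# e.g. 600 px focal length, 5 cm baseline, spot 10 px off the spin axis
print(laser_altitude(10.0, 600.0, 0.05))  # 3.0 meters
```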

With the camera limitations & frame rates now well known, any kind of object recognition was ruled out.  The only real option was what it was on Marcy 1: color-based blob detection.
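As a reference for what that means, a minimal chroma-threshold blob detector; the tolerance & plane layout are assumptions, not the Marcy 1 code.

```python
import numpy as np

def find_blob(cb, cr, target, tol=12):
    # cb, cr: 8-bit chroma planes; target: (Cb, Cr) of the marker color.
    # Returns the centroid & count of pixels whose chroma is near target.
    mask = (np.abs(cb.astype(int) - target[0]) < tol) & \
           (np.abs(cr.astype(int) - target[1]) < tol)
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean(), len(xs)
```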

 

chroma



Setting the camera to record only chroma reveals that it detects RGB.  Yellow doesn't show up at all.  Some fluorescent pink posterboard showed up nicely.

chroma02.jpg



Even with it pointing down at a featureless floor, chroma keying was still a problem.

chroma03.jpg

chroma04.jpg

The onboard camera didn't work as well as the DSLR.  It was in chroma-only mode to get enough information to do chroma keying at 25fps.  The color resolution was 160x240 & the hardware saturation was at maximum, which still left a lot of saturation variation.

Surprising how well the blue came out.  Humans must not see it as well as cameras.  Yet it detected pink better than red.



Saturation & hue were still being captured.  Maybe it would go faster if just the hue was captured.

chroma05.jpg


Plotting all the possible chroma values produced a 256x256 lookup table for converting chroma to hue.
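A sketch of building such a table, assuming 8-bit Cb/Cr planes with neutral chroma at (128, 128); packing the hue angle into a byte is an assumption.

```python
import numpy as np

cb = np.arange(256, dtype=np.float32) - 128.0
cr = np.arange(256, dtype=np.float32) - 128.0
CB, CR = np.meshgrid(cb, cr)                      # CR varies down the rows
hue = np.degrees(np.arctan2(CR, CB)) % 360.0      # angle around neutral
hue_lut = (hue * 255.0 / 360.0).astype(np.uint8)  # pack into one byte

# Per-pixel conversion is then a single table lookup:
# hue_img = hue_lut[cr_plane, cb_plane]
```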

chroma06.jpg


The result was such a noisy image that it actually increased the bitrate & dropped the framerate to 15.  Not the intended result.

What looks like more differentiation between green & the background, the computer didn't actually notice.  It still detected the least differentiation with green & the most with pink.




Anyways, some tests with OpenCV blob detection didn't work; it requires training data, which they don't give an example of.  Camshift showed pretty good results.  Red, yellow, & blue showed good separation from the background.  The camshift algorithm has a good detection step for getting the size of the object.

The main problem is that it takes a lot of tweaking.  Manual white balance was a must.  Also, it did best with the full RGB data instead of just the chroma.
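A sketch of that pipeline with OpenCV's CamShift; the video name, initial window, & mask thresholds are exactly the knobs that need the tweaking, & the values here are illustrative.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("flight.avi")
ok, frame = cap.read()
track_window = (60, 100, 40, 40)                 # initial box on the marker
x, y, w, h = track_window
hsv_roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
# Hue histogram of the marker; mask out dark & desaturated pixels,
# which otherwise flood the backprojection with noise.
mask = cv2.inRange(hsv_roi, np.array((0., 60., 32.)),
                   np.array((180., 255., 255.)))
hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    # CamShift resizes the window each frame, giving the object's size.
    rot_box, track_window = cv2.CamShift(back, track_window, term)
    print(track_window)
```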

Full RGB limits the chroma to 80x120 & the luminance to 160x240.  160x240 with full frames buffered, full chroma resolution, & dropped frames started looking pretty tempting.
