Evolution of a position tracker


vision36.jpg


So the red/blue marker failed: flying over an upward-facing light caused all color to be lost.  Sensor saturation makes any color-based marker unworkable.

There is a glitch-prone way to fix the white balance on the webcam.  It has to be fixed before capturing the 1st frame.  That made the LED white always come out white & the CFL ambient light turn yellow.  It was potentially an easy way to separate out the background, until you realize CFLs are being superseded by white LEDs.

vision37.jpg

If the lighting pointed down, the whites couldn't be separated from the CFL & colors were still lost.  The webcam can still knock out the color saturation with an ND filter.


Next would be flashing the LED.

vision43.jpg


vision38.jpg

vision39.jpg

An anti-static bag as an ND filter got it down enough to resolve color from LEDs.  It actually seemed robust enough to handle different distances.


vision40.jpg





Without paper


vision41.jpg

With paper.



vision42.jpg

The best arrangement has the LEDs in opposing directions & colored paper.  The paper adds more coverage.  It's really a slight difference.  Without paper, it's a lot lighter, but any production version would use paint.

A human looking at this thinks there must be a way to detect the inner red edge & extrapolate a circle.

It's not clear why the color facing out is overwhelmed by the color facing in, or why the blue has a red outline.  The shape while rotating is completely different from the stationary shape.

A hard edge is required to get an accurate semicircle.  So another algorithm emerges.  Once again, scan for all the red blobs.  Take only blobs with a minimum number of adjacent blue pixels.  Take the center of the largest blob as the center of the circle.  Test rays from the center.  Take all the points where the ray turns from red to blue or off.  Extrapolate the circle dimensions & center as with Marcy 1.
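The steps above can be sketched in Python on a synthetic image.  Everything here is an assumption for illustration: the pixel encoding (0 = off, 1 = red), the flood-fill blob labeling & the fit-by-average-ray-length; the adjacent-blue filter & the real camera interface are omitted for brevity.

```python
# A minimal sketch of the ray-scan circle fit, run on a synthetic frame.
# Pixel encoding (assumed): 0 = off, 1 = red.
import math

def largest_red_blob_center(img):
    """Flood fill every red (1) region & return the centroid of the largest."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    best = []
    for y in range(h):
        for x in range(w):
            if img[y][x] == 1 and not seen[y][x]:
                blob, stack = [], [(x, y)]
                seen[y][x] = True
                while stack:
                    px, py = stack.pop()
                    blob.append((px, py))
                    for nx, ny in ((px+1,py), (px-1,py), (px,py+1), (px,py-1)):
                        if 0 <= nx < w and 0 <= ny < h and \
                           img[ny][nx] == 1 and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((nx, ny))
                if len(blob) > len(best):
                    best = blob
    return (sum(p[0] for p in best) / len(best),
            sum(p[1] for p in best) / len(best))

def fit_circle_by_rays(img, rays=64):
    """Cast rays from the red blob center, record where each ray stops
    being red, & average those distances into a circle radius."""
    cx, cy = largest_red_blob_center(img)
    h, w = len(img), len(img[0])
    radii = []
    for i in range(rays):
        a = 2 * math.pi * i / rays
        dx, dy = math.cos(a), math.sin(a)
        r = 0.0
        while True:
            x, y = int(cx + dx * r), int(cy + dy * r)
            if not (0 <= x < w and 0 <= y < h) or img[y][x] != 1:
                break
            r += 0.5
        radii.append(r)
    return cx, cy, sum(radii) / len(radii)

# Synthetic frame: a red disk of radius 10 centered at (20, 20).
img = [[1 if (x - 20) ** 2 + (y - 20) ** 2 <= 100 else 0
        for x in range(40)] for y in range(40)]
cx, cy, r = fit_circle_by_rays(img)
```

On the synthetic disk this recovers the center to within a pixel & the radius to within about 1 pixel, which is about what the averaging can do with a hard edge.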

A rough experiment could just scan all the red pixels & skip blob detection, but if someone flies in a room full of red LEDs, there's going to be a problem.

 
vision44.jpg
So much for that.  The circle isn't always contiguous.  It's more of a camera limitation than an algorithm flaw.
vision45.jpg


The worst the camera did.  Those red gaps between the blue are also a problem.  All roads lead to another board camera.

vision46.jpg

vision47.jpg


Various errors

vision48.jpg
An ideal outcome.

Then came flashing LEDs & just the red LED.  Flashing was pointless.  The red bleeds too much & the blue has red fringes.  Having just the red LED was promising.

vision49.jpg

 Without paper.

vision50.jpg


With paper.  It unexpectedly showed a hard edge.

Maybe 2 red LEDs would be better.  A new algorithm slowly emerges, in which the largest blob from each frame is detected before the accumulator.  Then, the accumulated blobs are measured.
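The accumulator idea could be sketched like this: keep only the largest blob from each frame, OR those masks into one accumulated mask, then measure the accumulated shape.  The frame format (lists of 0/1) & the bounding-box measurement are assumptions for illustration, not the actual firmware.

```python
# Sketch of the per-frame largest-blob accumulator.  Single-pixel noise
# specks are discarded because they never win the largest-blob test.

def largest_blob(mask):
    """Return the pixel set of the largest 4-connected region of 1s."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = set()
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                blob, stack = set(), [(x, y)]
                seen[y][x] = True
                while stack:
                    px, py = stack.pop()
                    blob.add((px, py))
                    for nx, ny in ((px+1,py), (px-1,py), (px,py+1), (px,py-1)):
                        if 0 <= nx < w and 0 <= ny < h and \
                           mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((nx, ny))
                if len(blob) > len(best):
                    best = blob
    return best

def accumulate(frames):
    """OR the largest blob of each frame together & return the bounding
    box (x0, y0, x1, y1) of the accumulated blob."""
    acc = set()
    for mask in frames:
        acc |= largest_blob(mask)
    xs = [p[0] for p in acc]
    ys = [p[1] for p in acc]
    return min(xs), min(ys), max(xs), max(ys)

# 2 frames of a moving 3x3 marker, each with a 1-pixel noise speck that
# the per-frame largest-blob step throws away.
def blank():
    return [[0] * 10 for _ in range(10)]

f1 = blank()
for y in range(2, 5):
    for x in range(2, 5):
        f1[y][x] = 1
f1[0][9] = 1                       # noise, discarded
f2 = blank()
for y in range(6, 9):
    for x in range(6, 9):
        f2[y][x] = 1
f2[9][0] = 1                       # noise, discarded
box = accumulate([f1, f2])
```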

 



vision51.jpg

Position sensing with this level of detection has already been proven.


Position tracking in ambient light is really starting to gel.  These were hopefully worst case scenarios.  The typical use would be a room with fluorescent lights pointing down from the ceiling or fluorescent lights pointing up through a lampshade.


Blob tracking, as opposed to Marcy 1's straight luma keying, is definitely required.  The red LED won because there isn't a trend toward replacing CFLs with red LEDs like there is with white LEDs.


Red shows pure red.  Blue has a ring of red around it.  A white LED shows blue & red.  Not sure how a room lit with white LEDs would fare.  It would definitely take feature detection, with a POV pattern under the wing.  Maybe throwing out blobs with adjacent blue.

Takeoff is a real unknown, requiring the camera to be too close.  A fisheye lens might improve matters.  Those jelly bean lenses might drastically change the camera algorithm.  It just takes 1 week to order anything.

The options are basically making it statically stable on the takeoff stand, or pointing the camera diagonally to extrapolate attitude from the mostly unseen disk.




vision52.jpg
The $30 wide angle lens was pretty disappointing.  Still needs 16" of clearance to see the full disk.


After giving up on actively stabilizing the takeoff attitude, because there's no way to determine attitude with the current camera position, a quick spin-up showed she could be made passively stable on the takeoff stand & didn't naturally oscillate in a hover.

So the takeoff attitude is stable on the current stand & below the takeoff power.  Before the takeoff, there's a power level at which the attitude is unstable.  Then it takes off, but the camera can't see it until it's pretty high.

Her 1st hover was still a Chinese toy, purely manual throttle & no cyclic.  The ground based vision was able to track the marker LED in ambient light.  All that's needed is functioning camera gimballing & position sensing to finish the flight portion.


Pulling off the takeoff with the current camera position is rough.  There's hard-coding a starting throttle, then increasing throttle until the period hits a certain range, then instantly stepping it up to a known takeoff power so it doesn't take off faster than the camera can gain a position lock.
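That staged throttle could be sketched as a single control step.  Every number below (starting throttle, ramp rate, period window, takeoff power) is an invented placeholder, not the real tuning.

```python
# Sketch of the staged takeoff throttle as 1 control step.
# All thresholds are invented placeholders.
def takeoff_throttle(period, throttle,
                     start=0.3, ramp=0.005,
                     period_window=(0.18, 0.22), takeoff_power=0.6):
    """period: measured rotation period in seconds.
    throttle: current throttle fraction.  Returns the next throttle."""
    if throttle < start:
        return start                           # hard coded starting throttle
    if period_window[0] <= period <= period_window[1]:
        return takeoff_power                   # step straight to takeoff power
    return throttle + ramp                     # keep ramping
```

Calling this once per control cycle jumps to the starting throttle, ramps until the period enters the window, then steps straight to takeoff power so the climb starts from a known state.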

There's just tracking the 1st accumulated blob of minimum size.  When all 4 sides are in frame, begin calculating position.  That only happens after some climbing.

Finally, there's the question of using an alt/az or equatorial camera mount.  The camera is in the center of the flying area & can't break contact to flip around.  The servos only do 180 deg, necessitating an equatorial mount.  An alt/az mount couldn't maintain constant contact if it was orbiting directly overhead.
WEBCAM VS BOARD CAM
A webcam is all but useless & unaffordable.  The actual camera would be a board cam, able to stop down enough to knock out all color saturation.  The exposure must also be long.  It would be much harder to track if there were gaps in the circle.
There are ways to make an IR camera, but you know why those Vicon rooms don't have windows.
The availability of 1 hung low brand webcams for $3 makes one doubt the viability of $10 board cams, but webcam rolling shutter & scan rate are worse.  They're also much bigger than the $10 board cams, with very large circuits in addition to the camera module, so they wouldn't be practical on the aircraft.  The 1 hung low brands don't have enough manual control.

A ground cam with 2 USB cables is impractical.  To manage the number of USB cables, the leading strategy is a ground IR receiver controlled from an airborne IR transmitter.

The problem is you need to send a compass reading from the ground camera.  
webcam01.jpg

The 2 junk webcams in the apartment aren't useful, since the final product is heading towards a 2nd custom board with a board cam.  It's easier for the servo PWM, magnetometer, & manual camera control to be on 1 chip.

 

 

 

 

DESIGNING AN EQUATORIAL MOUNT
As predicted, a ground based camera gimbal needs a lot of labor & parts to assemble.  The trick is finding the simplest, most compact, uniform parts.  The software for aiming the equatorial mount is also a buster.


vision53.jpg


equatorial05.jpg
 
equatorial06.jpg
 
equatorial07.jpg
 
 
 
vision54.jpg

So the minimal cost equatorial mount ended up a lot bigger than the alt/az mount, even with the micro servos.  The main reason is the servo shafts aren't in the middle, so the attachments need to clear a very wide box.

You'd think servos would have evolved to have centered shafts by now, but the old 1 sided shaft is the most efficient design.  The mount could be smaller, by using more complex parts.


 Next came the most compact equatorial mount, using more unique parts.
equatorial01.jpg
 
equatorial02.jpg
 
equatorial03.jpg
 
equatorial04.jpg
 
It's definitely smaller than the alt/az mount.
 Considering the uninterrupted hemisphere view, it's surprising more antennas don't use equatorial mounts.  The next great task is software to aim it.

Given X & Y in the image, calculate the direction in the ground plane & the servo steps to center the image.  X & Y in the image aren't X & Y to the servos.  It takes serious high school algebra to convert between image & servo reference.
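A hedged sketch of that conversion, assuming a pinhole small-angle model & a single roll angle between the image axes & the servo axes.  The focal length & degrees-per-step values are placeholders; a real equatorial mount would need its own calibration.

```python
# Sketch of image-plane error -> servo steps for an equatorial mount.
# focal_pix & deg_per_step are invented placeholders.
import math

def image_to_servo(dx_pix, dy_pix, roll_deg,
                   focal_pix=500.0, deg_per_step=0.18):
    """dx_pix, dy_pix: target offset from the image center in pixels.
    roll_deg: rotation of the image frame relative to the servo axes,
    which changes as the polar axis turns.
    Returns (polar_steps, dec_steps) to center the target."""
    # pixel offset -> angular error in degrees
    ax = math.degrees(math.atan2(dx_pix, focal_pix))
    ay = math.degrees(math.atan2(dy_pix, focal_pix))
    # rotate the image axes into the servo axes
    rr = math.radians(roll_deg)
    polar = ax * math.cos(rr) - ay * math.sin(rr)
    dec = ax * math.sin(rr) + ay * math.cos(rr)
    return round(polar / deg_per_step), round(dec / deg_per_step)
```

With roll at 0 a horizontal pixel error maps straight to the polar axis; at 90 degrees the same pixel error maps entirely to the declination axis, which is exactly why image X & Y aren't servo X & Y.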
 
 

Comments

  • Interesting.

    Have you tried PS3 cams? I think they're called eye-cam and are a pretty good bargain if you're looking for computer vision on a budget. Since they are designed to catch the "color balls", they should be a good fit for what you're trying to do, I would guess.
