Blob vision

 
It does work on the test stand, for the most part. There's a lot of lag. There's a dead area directly on the axis of rotation. The range of working ambient light is very narrow. The mane limitation is not knowing where the axis of rotation is in the frame.

 
vision17.png
  The altitude was manely unaffected by bank.  X & Y were pretty bad.  There is an oscillation from the rotation.  If only 1 frame per rotation were counted instead of the 3 in which the blob is visible, the oscillation would go away, but fewer data points would be averaged.
  X & Y seemed roughly aligned with magnetic N.  Further alignment would require stabilizing the test attitude.
 

marcy2_49.jpg



The mane event was devising every possible test for position sensing.  Position sensing from a rotating camera & all the trigonometry required was too crazy to believe it would work.  There were a lot of electronic dongles hanging in mid air.  Crashing would get expensive.

When Marcy 2 was originally conceived, there was full confidence that everything currently on the air frame was physically enough to do the job.  Confidence wanes when it's built up & sitting on the test stand.

The leading technique was the old ceiling-hung wire, but it didn't constrain bank angle.  As expected, the bank angle drastically impaired position sensing.  When level, the azimuth correlation & blob detection seemed to work.

There were a lot of obvious glitches which the deglitching couldn't handle.  The camera detected noise as the target when pointing away from the target.

Thus, the error-prone test of a minimum blob size was required.  Manely, the largest blob in the last revolution was taken.  Then, all blobs below half its size were excluded.  A blob greater than 1/2 the maximum could still sneak in when the camera was pointed away, so deglitching would still be required.  A really paranoid filter would take only the largest blob from the last revolution.
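A rough sketch of what that filter might look like, with made-up names & data layout rather than the actual flight code:

```python
# Hypothetical sketch of the minimum blob size filter described above.
# Sizes are in pixels; the blob representation is an assumption.

def filter_blobs(blobs_this_rev):
    """Keep only blobs at least half the size of the largest blob seen in
    the last revolution.  blobs_this_rev is a list of (size, x, y) tuples
    accumulated over one camera revolution."""
    if not blobs_this_rev:
        return []
    max_size = max(size for size, x, y in blobs_this_rev)
    # Exclude everything below half the maximum.  A noise blob bigger than
    # half the maximum can still sneak in, so deglitching is still needed.
    return [b for b in blobs_this_rev if b[0] >= max_size / 2]

def paranoid_filter(blobs_this_rev):
    """The really paranoid variant: take only the single largest blob."""
    return [max(blobs_this_rev, key=lambda b: b[0])] if blobs_this_rev else []
```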

She already has trouble differentiating the target from the Heroine Clock.  This is the reality of machine vision.

marcy2_50.jpg

The quest for a more robust electronics arrangement continues.  The mane board can be repaired, but the wifi card has a $5 tag & works better near the axis of rotation.

Fly & crash she did.
vision18.jpg
 The flight revealed no useful position information.  The takeoff attitude hold was so accurate that it kept the target halfway off the screen for the entire flight.  All the effort spent getting the camera to view the axis of rotation still didn't get enough of it in view to give a target.
  The attitude hold stays active until the takeoff altitude is reached, just like ground based vision.  Only then does it switch to position hold.  It's probably acceptable if the takeoff altitude is low.
The current blob won't do.
vision19.jpg

vision000100.jpg

Then there were 2 markers.  Sunlight caused some blue in the chromatic aberration. 

Having 2 markers creates a lot of possibilities.  The overlap of the 2 markers can be used to throw out false blobs, but also eliminates some real blobs.  The distance between the 2 markers & size of the 2nd marker can give a better distance measurement, but the distance is affected by rolling shutter.  The blue marker can be visible sooner in the takeoff.
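As a sketch of how the marker spacing could map to distance, here's the standard pinhole relation; the focal length & marker spacing are made-up numbers, the markers are assumed roughly perpendicular to the camera axis, & rolling shutter skew is ignored, which is exactly the problem noted above:

```python
# Rough pinhole-camera sketch of using the spacing between the 2 markers
# to estimate distance.  All constants are assumptions for illustration.

MARKER_SPACING_M = 0.15      # real distance between red & blue markers (assumed)
FOCAL_LENGTH_PX = 600.0      # camera focal length in pixels (assumed)

def distance_from_markers(red_center, blue_center):
    """Estimate camera-to-marker distance from the pixel separation
    of the two marker centroids."""
    dx = red_center[0] - blue_center[0]
    dy = red_center[1] - blue_center[1]
    separation_px = (dx * dx + dy * dy) ** 0.5
    if separation_px < 1:
        return None                      # markers overlap; no estimate
    return FOCAL_LENGTH_PX * MARKER_SPACING_M / separation_px
```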


You can see how more targets could be added & detected based on proximity.  Then, the position could be even more refined.

Of course, blue immediately showed the same horror glitches it did before.  Light blue might work better, but it's not mass produced.  Maybe it would work if all blobs were thrown out that weren't a red & a blue next to each other.

That was disappointing, but there's hope that some simple shape detection is possible or that some redundant marker can work in the 1st moments of takeoff.  A system that diabolically complicated brings to mind the idea of detecting position from a partially obscured blob.

The radius & center can be derived from the dimensions of the rectangle.  But that also depends on rolling shutter.
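A sketch of that idea, assuming only one side of the circle is cut off by the frame edge & ignoring rolling shutter; the function & its arguments are illustrations, not the flight code:

```python
# Recover the full circle from the bounding rectangle of a partially
# obscured blob.  The unclipped dimension is taken as the true diameter
# & the center is pushed back out past whichever edge clipped the blob.

def circle_from_clipped_box(x0, y0, x1, y1, frame_w, frame_h):
    """(x0, y0)-(x1, y1) is the blob's bounding rectangle in pixels."""
    w = x1 - x0
    h = y1 - y0
    radius = max(w, h) / 2.0          # larger dimension = full diameter
    cx = (x0 + x1) / 2.0
    cy = (y0 + y1) / 2.0
    if y1 >= frame_h:                 # blob runs off the bottom edge
        cy = y0 + radius
    elif y0 <= 0:                     # blob runs off the top edge
        cy = y1 - radius
    if x1 >= frame_w:                 # blob runs off the right edge
        cx = x0 + radius
    elif x0 <= 0:                     # blob runs off the left edge
        cx = x1 - radius
    return cx, cy, radius
```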

A 2nd pink circle would be hard to separate from the 1st.  Enough testing could define a minimum distance to resolve the 2.  Then they could be mounted on a blue background, guaranteed to not generate a false positive.  How would it know where the maximum distance between the 2 blobs ended & ambient noise began?

It could measure the size of the small blob & compare it to the distance from the big blob. 

vision20.jpg

The small blob isn't big enough to get a useful size measurement, & if only the small blob is visible, it'll trigger a false positive.  The scanline compression takes away a lot of photons.  Some rough, procedural shape detection may be the only way.


vision21.jpg






A line was also worthless.   At flight speed, it takes a lot of photons to trigger the mask.

vision22.jpg



Blob matching based on overlapping bounding boxes of the red & blue got rid of some pretty significant blue noise.  Bounding boxes aren't as robust as scanning every red pixel for a neighboring blue, but they're faster.
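A minimal sketch of the bounding box test, with an assumed blob representation:

```python
# Keep a red blob only if its bounding box overlaps some blue blob's box.
# Each blob is assumed to carry a 'box' = (x0, y0, x1, y1) in pixels.

def boxes_overlap(a, b):
    """a & b are (x0, y0, x1, y1) bounding boxes."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def match_red_to_blue(red_blobs, blue_blobs):
    """Return only the red blobs whose box overlaps a blue box."""
    return [r for r in red_blobs
            if any(boxes_overlap(r['box'], b['box']) for b in blue_blobs)]
```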

vision24.jpg


The final attempt in this red/blue combination was scanning every red blob pixel for adjacent blue pixels & taking the largest blue blob.  This was bulletproof at separating the red/blue marker from the noise.

It was a leap in intelligence, actually detecting details in a noisy image from a spinning camera, with complicated rules for switching to the blue blob when the red blob was obscured & throwing out all red blobs without an adjacent blue blob, except during takeoff. 
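A sketch of the per-pixel adjacency scan, with assumed masks & pixel lists rather than the real data structures:

```python
# Walk every pixel of a red blob & look for a blue pixel in its 8-neighborhood.

NEIGHBORS = [(-1, -1), (0, -1), (1, -1),
             (-1,  0),          (1,  0),
             (-1,  1), (0,  1), (1,  1)]

def red_blob_touches_blue(red_pixels, blue_mask, width, height):
    """red_pixels: list of (x, y) coordinates belonging to one red blob.
    blue_mask: 2D list of booleans, True where a pixel passed the blue test."""
    for x, y in red_pixels:
        for dx, dy in NEIGHBORS:
            nx, ny = x + dx, y + dy
            if 0 <= nx < width and 0 <= ny < height and blue_mask[ny][nx]:
                return True
    return False

# Red blobs that never touch blue are thrown out, except during takeoff,
# when the red marker may be the only one in view.
```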

vision25.png






Surprisingly, the blue & red blobs seemed to give very consistent results, right down to the Y offset between the blobs.


In the worst case, 1 original plan, if the camera couldn't look straight down, was to have the red marker off center for all position sensing & the blue marker for the takeoff leveling, with a new requirement to not hover directly over the red.

vision26.jpg


Another test flight & another crash as the denoising algorithm throws out good data.  If the aircraft rises too fast & the blobs get small too fast, they get thrown out for being too small.

There were only 20 good frames it could have used during the takeoff, if it worked.   


Anyways, started thinking more about tethered power.  The 10 minute flight time & the cost of crashing batteries over the years are such a turnoff that it makes you think surely the infinite flight time of a tethered system can outweigh the drawbacks.

It's been sold before as a finished product, with limited results.  Maybe it was sold to the wrong customers.  Someone interested in a flying camera for photographing on a closed set would be better off with a tethered system.  Hobbyists are manely interested in hovering a camera in a stationary location.

The mane limitation of all FPV videos is they have a very limited horizontal range, beyond which they always have to turn around.  It's hardly enough justification for batteries.

The ideal tethering system has an insulated wire for V+ & an uninsulated wire for ground.  The wires are as thin as possible.  The voltage at the ground end is much higher than the motor voltage, to compensate for resistance in the wire.  The motors are wound for very high voltage & low current.
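Some back-of-the-envelope numbers show why; every figure here is an assumption picked for illustration, not a measurement from the aircraft:

```python
# Why the ground-end voltage has to be much higher than the motor voltage.
# All values are made-up assumptions for illustration.

MOTOR_POWER_W = 60.0        # power the motor actually needs (assumed)
MOTOR_VOLTAGE_V = 48.0      # high-voltage, low-current winding (assumed)
WIRE_RESISTANCE_OHM = 5.0   # round-trip resistance of a long, thin tether (assumed)

current = MOTOR_POWER_W / MOTOR_VOLTAGE_V            # 1.25 A
drop = current * WIRE_RESISTANCE_OHM                 # 6.25 V lost in the wire
ground_voltage = MOTOR_VOLTAGE_V + drop              # ~54 V at the ground end
loss_w = current ** 2 * WIRE_RESISTANCE_OHM          # ~7.8 W heating the tether

print(ground_voltage, loss_w)
# At 12 V the same 60 W would need 5 A, a 25 V drop & 125 W of wire loss,
# which is why the motors are wound for high voltage & low current.
```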

The monocopters can't be tethered.


Comments

  • Moderator

    @Jack, 

    Did you try the YCbCr color space? You could limit BlueMin < Cb < BlueMax and RedMin < Cr < RedMax to detect each marker. Since Y (illumination) is ignored, the above inequalities would detect the markers under different illuminations.

    What do you use for processing? a small single board computer?

  • If mono copters can't be tethered then why was yours? hmmm? Do you have a sky ceiling hook?

    Seriously though, really cool stuff Jack. When the new gen 3 ardupilots come out, there will be lots of processor time to fill.
