I've been experimenting with an Arduino-powered vision system to detect and locate point light sources in an environment. The hardware setup is an Arduino Duemilanove, a Centeye Stonyman image sensor chip, and a printed pinhole. The Arduino acquires a 16x16 window of pixels centered underneath the pinhole, which covers a good part of the hemispherical field of view in front of the sensor. (This setup is part of a new ArduEye system that will be released soon...)

The algorithm determines that a pixel is a point light source if the following four conditions are met: First, the pixel must be brighter than its eight neighbors. Second, the pixel's intensity must be greater than an "intensity threshold". Third, the pixel must be brighter, by a "convexity threshold", than the average of its upper and lower neighbors. Fourth, the pixel must similarly be brighter, by the same threshold, than the average of its left and right neighbors. The algorithm detects up to ten points of light. The Arduino sketch then dumps the detected light locations to the Arduino serial monitor.
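Here is a minimal sketch of those four tests applied to one pixel of the 16x16 image. The array and threshold names (and the threshold values) are mine for illustration only; the actual implementation is in the LightTracker sketch linked below.

```cpp
// Illustrative check of the four point-light conditions on pixel (r, c).
// img, INTENSITY_THRESH, and CONVEXITY_THRESH are placeholder names.
#define ROWS 16
#define COLS 16
#define INTENSITY_THRESH 120  // condition 2: minimum brightness (example value)
#define CONVEXITY_THRESH  10  // conditions 3 and 4: how much the peak must stand out

bool isPointLight(const unsigned char img[ROWS][COLS], int r, int c) {
  if (r < 1 || r > ROWS - 2 || c < 1 || c > COLS - 2) return false;  // need all 8 neighbors
  int p = img[r][c];

  // Condition 1: strictly brighter than all eight neighbors.
  for (int dr = -1; dr <= 1; dr++)
    for (int dc = -1; dc <= 1; dc++)
      if ((dr != 0 || dc != 0) && img[r + dr][c + dc] >= p) return false;

  // Condition 2: above the intensity threshold.
  if (p <= INTENSITY_THRESH) return false;

  // Condition 3: brighter than the average of the upper and lower neighbors
  // by at least the convexity threshold (written without division).
  if (2 * p - img[r - 1][c] - img[r + 1][c] <= 2 * CONVEXITY_THRESH) return false;

  // Condition 4: the same test against the left and right neighbors.
  if (2 * p - img[r][c - 1] - img[r][c + 1] <= 2 * CONVEXITY_THRESH) return false;

  return true;
}
```

An outer loop (not shown) would scan the interior pixels and stop after ten detections.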

A 16x16 resolution may not seem like much when spread out over a wide field of view. So to boost accuracy we use a well-known "hyperacuity" technique to refine the pixel position estimate to a precision of about a tenth of a pixel. The picture below shows the technique: if a point of light exists at a pixel, the algorithm takes that pixel's intensity and the intensities of its left and right neighbors, interpolates them with a second-order Lagrange polynomial, and computes the maximum of that polynomial. This gives us "h", a subpixel refinement value that we then add to the pixel's whole-valued horizontal position. The algorithm then does something similar to refine the vertical position using the intensities above and below the pixel in question. (Those of you who have studied SIFT feature descriptors should recognize this technique.) The nice thing about this technique is that you get the light-tracking precision of roughly a 140x140 image without exceeding the Arduino's 2 kB memory limit.

[Image: subpixel refinement of a detected light's horizontal position using second-order Lagrange interpolation]
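In code, the horizontal refinement reduces to a single expression; the vertical refinement is the same formula applied to the pixels above and below. This is a hedged sketch with illustrative names, not a copy of the LightTracker code:

```cpp
// Parabolic (second-order Lagrange) peak interpolation.
// Fit a parabola through (-1, l), (0, c), (+1, r), where c is the peak pixel's
// intensity and l, r are its left/right neighbors; the vertex lies at
//   h = 0.5 * (l - r) / (l - 2*c + r)
// The convexity test above guarantees l + r < 2*c, so the denominator is
// nonzero (and the vertex is a true maximum), and |h| stays below 0.5.
float subpixelOffset(int l, int c, int r) {
  return 0.5f * (float)(l - r) / (float)(l - 2 * c + r);
}

// Example use for a light detected at integer column col of row row:
//   float colRefined = col + subpixelOffset(img[row][col - 1], img[row][col], img[row][col + 1]);
```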

The algorithm takes about 30 milliseconds to acquire a 16x16 image and another 2 or 3 milliseconds to locate the lights.

The first video shows detection of a single point light source, both with and without hyperacuity position refinement. When I add a flashlight, a second point is detected. The second video shows detection of three lights (dining room pendant lamps) including when they are dimmed way down.

It would be interesting to hack such a sensor onto a quadrotor or another robotic platform. Bright lights could serve as markers, or even targets, for navigation. Perhaps each quadrotor could have an LED attached to it, and the quadrotors could then be programmed to fly in formation or (if you are brave) pursue each other.

With additional programming, the same sensor could also implement optical flow computations much like what I did in a previous post.

SOURCE CODE AND PCB FILES:

The main Arduino sketch file can be found here: LightTracker_v1.zip

You will still need library files to run it. I've put these, as well as support documentation and Eagle files for the PCBs, in the downloads section of a Google Code project, located here: http://code.google.com/p/ardueye-rocket-libraries/downloads/list

Comments

  • @rsabishek- What do you mean by "LDR"?

  • @Sergei Vicon mocap systems probably use a few additional tricks: 1) synchronizing the image acquisition with the pulsing of the LEDs, which reduces the amount of ambient light that gets integrated onto the pixel circuits, and 2) filtering out other wavelengths. We can probably do something like that, albeit more limited, using our Firefly chips.

    I'm glad you mentioned the Wii camera- that is actually what inspired this. I think those cameras also use a form of hyperacuity pixel position refinement.

    The wirebonding is only because we get the chips back in bare die form- for prototyping it can be a bit of a pain (don't drink coffee first!) but that is an old art and very easy to automate for higher quantities. The printed pinhole is not too bad- it can be done in less than a minute and although we use optical adhesive, we have also used generic 5-minute RC epoxy!

    @Anish LOL!

  • Pulsing: this is implemented on modern mocap systems (i.e. Vicon T-series), but you have to do pulsing at a crazy rate to make it work with a moving camera: i.e. something like hardware capture&subtract over a few milliseconds. Vicon has done a good job of showing that it can work outdoors, as long as you avoid sensor saturation.

    This is a very neat demo/video. It reminds me of the Wii controller camera (tracks up to 4 objects, I think around 100 Hz, similar sensor). It sounds like a lot of work is required to get this sensor up and running though (wire bonding, the custom-made pinhole lens?)...

  • @Geoff if u were a better marketer than engineer you wouldnt have had any time to do anything interesting ;)

  • @Anish- I like computer vision. :) What's neat about it is that you can see and visually understand the result.

    Here's a better answer to Ellison's question:

    Basically, the main benefit of our approach is that the sensors are field programmable- you can think of them as "field programmable vision sensors" that become whatever you want them to be, based on whatever "app" (e.g. firmware) you program into the processor.

    I seem to have a heck of a time communicating that to people, though. I'm a better engineer than marketer... :)

  • looks like we are collecting enough of computer vision folks :), next step in evolution ...

  • Hi Ellison,

    That is a good question and I get it a lot.

    The ADNS3080 is basically a single purpose sensor that does one thing, and does it very fast. So if you just want to measure optical flow, using a fixed algorithm, and are operating in an environment with adequate light, then you can't beat the ADNS3080. Randy did a good job getting that product out.

    In general, our sensors are *not* as fast, but they are a lot more flexible. You can fine-tune the optical flow algorithm, or even invent a new one, and you can program the sensor to generate several optical flow measurements instead of one. You can even do something completely different- the sensor in this post is more similar to a Wii-mote (which tracks bright lights) than to an optical mouse sensor, and that is determined by the program on the Arduino (or whatever processor you are using). Our sensors also operate in lower light levels. The main drawbacks are that our sensors are not as fast (an Arduino will not give you kHz frame rates at 16x16!) and that they are less turnkey, since you have to program them before they will do anything.

    Let me know if this answers your question, or if you have more questions.

  • Hey Geoffrey, this is very interesting.  How does your algorithm and camera compare with the ADNS3080 mouse sensor that we're currently using for position hold?  The ADNS3080 has some DSP on it that does the calculations and just outputs a velocity vector.

  • The ADC is an 8-pin DIP and costs maybe $2 or $3 from Digikey, but will give you a good speed boost.

    If you keep the bright pixel in the same location of the visual field, and continue to approach it, you will eventually reach it. (This is like the sea-borne rule that you can tell you are on a collision course with another ship if the other ship does not change its angular position from your point of view.)
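    A minimal sketch of that constant-bearing check, assuming the tracker already gives you refined (x, y) light positions each frame (the names and tolerance here are hypothetical):

    ```cpp
    // If the tracked light's subpixel position barely drifts between frames
    // while you keep closing on it, you are on an intercept course.
    const float BEARING_TOL = 0.3f;  // pixels of drift we still call "constant" (made-up value)

    bool onInterceptCourse(float xPrev, float yPrev, float xNow, float yNow) {
      float dx = xNow - xPrev;
      float dy = yNow - yPrev;
      return (dx * dx + dy * dy) < (BEARING_TOL * BEARING_TOL);
    }
    ```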

  • 256 pixels at 10 ksps works out to roughly 40 Hz (10,000 samples/s ÷ 256 pixels ≈ 39 frames/s).

    As long as we stay well below 40 Hz as the pulse frequency, it might work.

    We could look at the same spot for a long time, maybe several seconds, and watch for the blinking light. We know the position relative to t-10s very accurately because of the accelerometer and gyros, so we could translate the pixel matrix so that pixel (10,5) is always the same spot on the ground even if we move. If the LED behaves like any other semiconductor, we could pulse it up to 5 W.
