User programmable sub-gram optical flow / vision sensor

For a long time I've been wanting to make an ultra minimalist vision / optical flow sensor for the hobbyist and experimentalist community. I've been pursuing this as a small IR&D pet project at Centeye. We're almost there.

The above photo shows one of these sensors next to a millimeter scale. The part count is small: one of our 64x64 custom image sensors, an Atmel ATmega644 processor, several resistors and capacitors, and some lightweight flat optics we developed. Two complete sensors are shown, one with optics mounted (yes, it's that thin!). Total mass is about 440mg. The primary interface is I2C/TWI, which will allow many sensors to be hooked up to a common bus. A secondary connector provides the ISP interface for uploading firmware.

We chose an ATmega processor since they are loved by hardware hackers and are easy to use. Ideally, one can upload any number of different "application firmwares" to a single sensor to make it whatever one wants, limited only by the processor and the base resolution. One firmware will turn it into an optical flow sensor. Another firmware will let it track bright lights. Yet another could turn it into something else. Or someone could write their own firmware, whether by tweaking existing source code (yes, I plan to share it) or writing something completely new.
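To make the bus idea concrete, here is a minimal sketch of how a host might poll one of these sensors over I2C/TWI. The 7-bit address and the register map are assumptions for illustration only; the real firmware defines its own protocol. A stub bus class stands in for a real I2C driver so the example runs anywhere.

```python
SENSOR_ADDR = 0x33   # hypothetical 7-bit I2C address
REG_FLOW_X = 0x02    # hypothetical register: signed X flow, pixels/frame
REG_FLOW_Y = 0x03    # hypothetical register: signed Y flow, pixels/frame

class FakeBus:
    """Stand-in for a real I2C driver; simulates a sensor reporting
    a flow of (+3, -1) pixels per frame."""
    def read_byte_data(self, addr, reg):
        regs = {REG_FLOW_X: 3 & 0xFF, REG_FLOW_Y: -1 & 0xFF}
        return regs.get(reg, 0)

def to_signed8(b):
    """Interpret a raw register byte as a signed 8-bit value."""
    return b - 256 if b > 127 else b

def read_flow(bus, addr=SENSOR_ADDR):
    """Read one (dx, dy) flow measurement from the sensor."""
    dx = to_signed8(bus.read_byte_data(addr, REG_FLOW_X))
    dy = to_signed8(bus.read_byte_data(addr, REG_FLOW_Y))
    return dx, dy

print(read_flow(FakeBus()))  # -> (3, -1)
```

With several sensors on the bus, a flight controller would simply call `read_flow` once per address in its control loop.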

An ATmega644 may not sound like much for image processing: 64kB flash, 4kB SRAM, 2kB EEPROM, 20MHz max. Neither does a 64x64 array. But the reality is that if you are clever, you really don't need a lot of resolution or processing power to get some nice results. (We once did an altitude hold demo with just 16 pixels and 1 MIPS back in 2001.)

We've already made our first batch of these (about 20) and handed them out to a few close collaborators. Based on feedback we are preparing our second run. The new sensors will be slightly larger and heavier (thicker PCB) but more rigid, and use strictly 0.1" headers for all IO and power (including programming). Mass should still be under a gram.

We also have an even smaller version in the works, shown below with a chip mounted and wire bonded (sorry about the mess). This board uses an ATtiny, and the 7mm x 8mm board alone weighs about 95mg. I think we can get a whole sensor made at about 120mg, if only I had the time! (Maybe some brave person here would like to take a stab at programming it?)


Comment by Ravi Gaddipati on July 9, 2010 at 3:47am
That would be cool; perhaps it's worth setting up a site? A couple thousand is a lot for a hobbyist, but if the cost were shared, perhaps more people would get into chip design.

Do you have any suggestions of where to learn how to design these things?
Comment by Geoffrey L. Barrows on July 9, 2010 at 7:04am
I think it would be worth setting up a site! There may actually already be such sites, but they are probably meant more for dedicated chip designers, as opposed to people who want to do things and, by the way, design chips as well.

I am mostly self-taught when it comes to designing chips. Professor Emeritus Carver Mead of Caltech had a research group (Physics of Computation) that in the 1980s made all sorts of nifty vision chips. This became a field called "neuromorphic engineering". That group had a web page describing how to get started, but Prof. Mead retired from Caltech years ago (and is now a serial entrepreneur), the group has long since disbanded, and the site was taken down about 10 years ago. His graduate students went on to do all sorts of things, many of them becoming professors at institutions around the world. One hotbed of research in this area is the Institute of Neuroinformatics in Zurich. Maybe you can go there for grad school. You can also join the online Institute of Neuromorphic Engineering, which is associated with the Zurich group but has members all around the world (including me).

If you want a taste of what this is like, I recommend the book "Analog VLSI and Neural Systems" by Carver Mead. Your college library probably has it. It is old (1989) but is a good introduction. There is also "Analog VLSI: Circuits and Principles" by Shih-Chii Liu (one of Carver Mead's students, now at INI in Zurich), which is more recent. Another book is "Vision Chips" by Alireza Moini. There are other books as well, but I cannot remember them offhand.

Here's something I learned: doing basic chip design is not difficult. It is like designing a circuit board, except that 1) you use the different layers to form transistors, resistors, and capacitors, 2) just like with PCBs, there is a set of design rules and good practices to learn, and 3) it takes more time and costs more money to fab a chip. I've had smart high school students work for me for a summer and finish with a chip design that worked.

Setting up a way for hardware hackers to design chips is a fascinating idea. In my opinion, a hardware hacker working on a project, and then designing a simple chip to help with that project, in many cases will design a much more useful chip than someone doing nothing but making chips. This vertical integration is one reason we've been able to do what we do.
Comment by Ravi Gaddipati on July 22, 2010 at 12:40pm
I was going to email you, but I figured I would post here for public benefit.

I have been thinking about the programming aspect of these, and I am wondering how beneficial it actually is to have multiple sensors. One pointing forward will be able to detect every degree of freedom, the most difficult being groundspeed and the easiest being pitch and yaw, with roll somewhere in the middle. I can see the benefit of multiple sensors, but what is the magnitude of that benefit?
Benefits I see:
-multitudes of points
-no proportional measurements (like measuring forward speed with a forward facing sensor)
-"Environmental awareness"

In a picture on your website, you say the chip will incorporate feature recognition (e.g. "Edge at 8,36"). Is that implemented yet? Or did you transition that to the MCU?

Again, great job!
Comment by Aurelio R. Ramos on January 29, 2011 at 8:08pm

I realize this thread is pretty old, but I thought I would add to the discussion. I've been intrigued by the ADNS2620 sensor (it is meant to be used in optical mice), and I wonder how hard it would be to mount one right behind a board camera lens mount and lens, illuminated with some IR LEDs. My main concern is that the mouse sensor usually has full control of illumination, and without an iris the sensor might saturate in daylight, or maybe the required illumination would be impractical. But still, even if it only worked at a few feet over the ground, I can see this as a useful addition for position holding in indoor applications. Has anybody tried this? I am going to do some experiments in the coming weeks.



Comment by Geoffrey L. Barrows on January 30, 2011 at 12:00pm

I've never played with any of the ADNS or similar sensors. I am guessing that you may have trouble generating enough light with the LEDs to get useful measurements except at short distances. Basically, if you are illuminating an environment, to increase the operating range by a factor of d you need to increase the output light by a factor of d^2. So going from a few centimeters (let's say 2cm) to a 1m range, you need 50^2 = 2500 times the light coming from the LEDs. Either that, or you need the sensor to be 2500 times as sensitive, or you can get that factor of 2500 through a combination of more light and higher sensitivity (and better optics too). Now if you can slow down the integration period of the sensor, you may be able to increase the light sensitivity a bit, but I haven't tried that.
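The inverse-square scaling above can be written as a one-line helper: to push an active-illumination system from range d1 to range d2, the combined gain from extra LED output, sensor sensitivity, and optics must grow by (d2/d1)^2.

```python
def required_gain(d1_m, d2_m):
    """Factor by which light output times sensitivity must increase
    to extend an active-illumination range from d1_m to d2_m (meters)."""
    return (d2_m / d1_m) ** 2

# Geoffrey's example: going from 2 cm to 1 m is a 50x range increase,
# so you need 50^2 = 2500x the light (or equivalent sensitivity).
print(required_gain(0.02, 1.0))  # -> 2500.0
```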


We've done a number of experiments with "hover in place" in the dark (I will be posting that in the future), but it took a lot of effort to pull the image out of the background.


Still, it's worth a try. First, I'd recommend using the brightest LEDs you can find, and make sure you can source a lot of current. Get LEDs with a narrow beam, say 5 or 10 degrees. Then use a lens with a longer focal length. This will help you squeeze the most out of the sensors.

Comment by Geoffrey L. Barrows on January 30, 2011 at 12:01pm
i.e. use a narrow-beam LED to project the light as far as you can, and then choose a lens that maps the LED's illumination pattern onto the sensor. Put the two close together. You'll have to do some aiming.
Comment by Gord Likar on January 30, 2011 at 1:20pm
Great project, fellows!  Wondering if a small camera flash could be used rather than LEDs?  Flash it periodically and take a sample.
Comment by Geoffrey L. Barrows on January 30, 2011 at 2:04pm
If you have a way to synchronize the "electronic shutter" (if any) in an image sensor, then you could flash a camera flash or even pump a huge pulse of current through a bank of LEDs for the same effect. (The latter might actually be better weight-wise.)

Comment by Randy on January 30, 2011 at 4:52pm


A couple of people have done the ADNS2620 mouse thing:

Marko did it first here on diydrones:

..and I repeated his results recently using slightly different hardware:

I haven't seen the ADNS2620 saturate in daylight, but it definitely has problems when the light gets low, although slowing down the update rate to its minimum of 367Hz helps a lot.

There's a better sensor as well, the ADNS3080, which I haven't tried yet, but it has better resolution and a faster update speed. I don't know how it will act in low light conditions yet, though.

