New Image Sensor Chips for Robotics and Embedded Vision

I just got back some new silicon! These are the latest image sensor chips I designed specifically for robotics and embedded vision applications. The pictures above show a full wafer followed by a close-up of the wafer from an angle. There are four chips in each reticle- if you look closely you can see them packed into a rectangle (about 8.8mm by 7.0mm). Shortly after that picture was taken, we had the wafer diced up into individual chips and started playing with them!

One of the chips is named “Stonyman” and is a 112 x 112 image sensor with logarithmic-response pixels and in-pixel binning. You can short together M x N blocks of pixels (M and N independently chosen from 1, 2, 4, or 8) to form bigger virtual pixels and quickly read out the image at a lower resolution if desired. The interface is extremely simple- there are five digital lines that you pulse in the proper sequence to configure and operate the chip, and a single analog output that carries the pixel value. Counting the two power lines (GND and VDD), only eight connections are needed to use this chip.
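To give a rough idea of what M x N binning does to the readout, here is a short Python simulation. This is purely a software sketch of the effect on resolution- on the real chip the shorted pixels settle to a combined value in the analog domain, so no software averaging is involved:

```python
def bin_image(img, m, n):
    """Simulate in-pixel binning: average each m-wide by n-tall block.

    On the chip the shorted pixels produce a combined analog value;
    here we average in software just to show the resolution change.
    """
    rows, cols = len(img), len(img[0])
    out = []
    for r in range(0, rows, n):
        out.append([
            sum(img[r + i][c + j] for i in range(n) for j in range(m)) / (m * n)
            for c in range(0, cols, m)
        ])
    return out

# A flat 112 x 112 "image": 4x4 binning yields a 28 x 28 readout,
# and 8x1 binning yields 14 values per row (one eighth the data).
raw = [[100] * 112 for _ in range(112)]
binned = bin_image(raw, 4, 4)
print(len(binned), len(binned[0]))   # 28 28
```

With M and N each restricted to 1, 2, 4, or 8, every legal block size divides 112 evenly, so the binned readout always tiles the array exactly.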

Another chip is named “Hawksbill” and is a 136 x 136 image sensor, also with logarithmic-response pixels (but no binning) and the same interface as Stonyman. What is different about Hawksbill is that its pixels are arranged in a hexagonal grid rather than the square grid used by Stonyman and 99% of other image sensors out there. Hexagonal sampling is unconventional, but it is actually mathematically superior to square sampling- a hexagonal grid covers the plane more efficiently, needing roughly 13% fewer samples to capture the same circularly band-limited image- and with recent advances in signal processing, many image processing operations can be performed more efficiently on a hexagonal array than on a square one.
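One practical question with a hexagonal array is how to index neighbors when the rows are stored as an ordinary 2D array. The chip's actual addressing scheme isn't described here, but a common convention is "odd-row offset" storage, where odd rows are shifted half a pixel pitch; a sketch under that assumption:

```python
def hex_neighbors(row, col):
    """Return the six neighbors of a hex pixel stored in odd-row-offset
    layout (odd rows shifted right by half a pitch). The offsets differ
    between even and odd rows because of the half-pitch shift."""
    if row % 2 == 0:
        deltas = [(-1, -1), (-1, 0), (0, -1), (0, 1), (1, -1), (1, 0)]
    else:
        deltas = [(-1, 0), (-1, 1), (0, -1), (0, 1), (1, 0), (1, 1)]
    return [(row + dr, col + dc) for dr, dc in deltas]

# Every interior pixel has exactly six equidistant neighbors,
# versus four (or eight unequal ones) on a square grid.
print(hex_neighbors(2, 3))
```

The six equidistant neighbors are part of what makes operations like gradient estimation and isotropic filtering cleaner on a hex grid.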

(Above: 8x8 hex pixel layout from CAD tools, Stonyman chip wire bonded to test board- pardon the dust!)

We plan to release the chips in the near future, with a datasheet, sample Arduino script, and (yes!) a schematic diagram of the chip innards. (If anyone *really* wants one now, I can make an arrangement…)

We are also working on a new generation ArduEye sensor shield with these chips. The shield will be matched to an Arduino Mini for small size, and use a 120 MIPS ARM for intermediary processing. The design will be “open”, of course. (Note- anyone who purchased an original ArduEye will get a credit towards the purchase of the new version when it comes out.)

(The thrill of getting new chips back is much like that for circuit boards. You designed it, so in theory you know how it works. But you are never 100% sure and there is no datasheet for you to consult other than your own notes or CAD drawings. You are always slightly afraid of getting a puff of smoke when you first power it. No smoke… the circuit breaker didn’t trigger… so all is good. Then you probe it, verify that different portions work as expected, tweak various settings, and finally get it working. The experience is just like that for a PCB except the stakes are higher.)


Comment by ionut on August 9, 2011 at 11:40pm
Where do you build these chips? In house?
Comment by Geoffrey L. Barrows on August 10, 2011 at 4:53am

We used X-Fab for these.

This is the first time we used this foundry and I am very happy with the results. One of the other chips on this run is a test chip with various experimental analog circuits. I had trouble getting these circuits to work in the past but they worked beautifully here. 

Comment by Mark Colwell on August 10, 2011 at 6:21am

Your new baby looks very healthy! Very cool, I want one of each!

Comment by Alex Pabouctisids on August 10, 2011 at 10:24am
Very cool. What tool did you use for the schematic entry and layout?

- Alex
Comment by jcrubino on August 10, 2011 at 12:12pm
"in-pixel binning", would this benefit low-light applications or just reduce data resolution?
Comment by Geoffrey L. Barrows on August 10, 2011 at 1:17pm

@Mark- on a current ArduEye board (no ARM) or later with the ARM?

@Alex- I used Tanner Tools Pro for schematic, layout, LVS, and Spice.

@JCrubino- in-pixel binning is primarily to reduce resolution, as well as to form virtual pixels whose shapes are other than square. For example, if you are placing this on a ground 'bot and the sensor is looking sideways, you may only care about information along the horizontal axis, and thus bin down using 8x1 rectangles. This gives you one eighth the data to acquire and process, which can speed things up.

You can speed up other basic image processing algorithms using binning. For example, if you wanted to implement your own Wii-mote-like light tracking device, you can first bin using 4x4 blocks to downsample the 112x112 raw array to 28x28 (=784 virtual pixels), then identify which virtual pixels have bright lights, then turn off binning, and then re-grab just those regions of interest to locate the lights to higher precision. We had things like this in mind when adding the binning (which really required just two additional transistors per pixel).
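That coarse-then-fine workflow can be sketched in a few lines of Python. This is a software stand-in with made-up threshold and coordinate values- on the real chip the coarse pass would come from a binned readout rather than software averaging:

```python
def coarse_hits(img, block, thresh):
    """Coarse pass: average block x block regions (a stand-in for a
    binned readout) and return top-left corners of the bright ones."""
    n = len(img)
    hits = []
    for r in range(0, n, block):
        for c in range(0, n, block):
            avg = sum(img[r + i][c + j]
                      for i in range(block) for j in range(block)) / block ** 2
            if avg > thresh:
                hits.append((r, c))
    return hits

def refine(img, r0, c0, block):
    """Fine pass: re-grab one flagged block at full resolution and
    return the coordinates of its brightest pixel."""
    return max(((img[r][c], (r, c))
                for r in range(r0, r0 + block)
                for c in range(c0, c0 + block)))[1]

# One bright light on a dark 112 x 112 frame
img = [[0] * 112 for _ in range(112)]
img[50][60] = 255
blocks = coarse_hits(img, 4, thresh=10)   # search 28 x 28 coarse image
light = refine(img, *blocks[0], 4)        # full-res search in one block
print(blocks, light)                      # [(48, 60)] (50, 60)
```

The coarse pass touches 784 values instead of 12544, and the fine pass only re-reads the handful of blocks that matter.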

Comment by Mark Colwell on August 10, 2011 at 1:36pm
With ARM is preferred- nice to have a subsystem for processing and comms.
Comment by Helldesk on August 10, 2011 at 1:46pm

Very cool. Do you suppose we might see these babies in some component in the DIY Drones store, some day, in the (near) future?

Comment by Alex Pabouctisids on August 10, 2011 at 2:49pm

Very cool, that's the tool I also used in school :).

