I just got back some new silicon! These are the latest image sensor chips I designed specifically for robotics and embedded vision applications. The pictures above show a full wafer followed by a close-up of the wafer from an angle. There are four chips in each reticle- if you look closely you can see them packed into a rectangle (about 8.8mm by 7.0mm). Shortly after that picture was taken, we had the wafer diced up into individual chips and started playing with them!

One of the chips is named “Stonyman” and is a 112 x 112 image sensor with logarithmic-response pixels and in-pixel binning. You can short together MxN blocks of pixels (M and N independently selected from 1, 2, 4, or 8) to implement bigger pixels and quickly read out the image at a lower resolution if desired. The interface is extremely simple- there are five digital lines that you pulse in the proper sequence to configure and operate the chip, and a single analog output holding the current pixel value. With the two power lines (GND and VDD), only eight connections are needed to use this chip.
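
To give a feel for the interface, here is a minimal Arduino-style readout sketch. The pin numbers, which control line advances what, and the exact pulse sequence are placeholders (the real names and timing will be in the datasheet); only the general pattern from the description above - pulse the digital lines, then sample the single analog output - is assumed.

```cpp
// Hypothetical wiring: five digital control lines plus the single analog output.
const int PIN_CTRL[5] = {2, 3, 4, 5, 6};
const int PIN_ANALOG  = A0;

const int RES = 112;            // Stonyman is 112 x 112
unsigned char rowBuf[RES];      // one row at a time keeps RAM use small on an Uno

// Pulse one of the five control lines high then low.
void pulse(int line) {
  digitalWrite(PIN_CTRL[line], HIGH);
  digitalWrite(PIN_CTRL[line], LOW);
}

void setup() {
  Serial.begin(115200);
  for (int i = 0; i < 5; i++) {
    pinMode(PIN_CTRL[i], OUTPUT);
    digitalWrite(PIN_CTRL[i], LOW);
  }
}

void loop() {
  // Walk the pixel array; which line steps rows vs. columns is a guess here.
  for (int r = 0; r < RES; r++) {
    pulse(0);                                  // e.g. advance the row pointer
    for (int c = 0; c < RES; c++) {
      pulse(1);                                // e.g. advance the column pointer
      rowBuf[c] = analogRead(PIN_ANALOG) >> 2; // 10-bit ADC scaled to 8 bits
    }
    Serial.write(rowBuf, RES);                 // stream the row out over serial
  }
}
```

Even as a sketch this shows the main cost driver: at roughly 0.1 ms per analogRead() on a stock Arduino, a full 112 x 112 frame takes on the order of a second to acquire, which is exactly where the binning modes earn their keep.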

Another chip is named “Hawksbill” and is a 136 x 136 image sensor, also with logarithmic response pixels (but no binning) and the same interface as Stonyman. What is different about Hawksbill is that the pixels are arranged in a hexagonal format, rather than a square format like Stonyman and 99% of other image sensors out there. Hexagonal sampling is not conventional, but it is actually mathematically superior to square sampling, and with recent advances in signal processing one can perform many image processing operations more efficiently in a hexagonal array than a square one.
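
For those who have not worked with hex grids: a common software trick is to store the samples in an ordinary 2D array and treat every other row as shifted by half a pixel pitch, so each interior pixel has six equidistant neighbors (versus four or eight unequal ones on a square grid). The helper below is a generic sketch of that bookkeeping and an assumption on my part, not a statement of how Hawksbill actually maps its pixels out.

```cpp
const int RES = 136;   // Hawksbill is 136 x 136

// Neighbours of pixel (row, col) in an "odd-row offset" hex layout, where odd
// rows are understood to sit half a pixel to the right of even rows. Returns
// the number of valid neighbours written to out (6 in the interior, fewer at edges).
int hexNeighbors(int row, int col, int out[6][2]) {
  static const int even[6][2] = { {-1,-1}, {-1,0}, {0,-1}, {0,1}, {1,-1}, {1,0} };
  static const int odd [6][2] = { {-1, 0}, {-1,1}, {0,-1}, {0,1}, {1, 0}, {1,1} };
  const int (*off)[2] = (row % 2 == 0) ? &even[0] : &odd[0];

  int n = 0;
  for (int k = 0; k < 6; k++) {
    int r = row + off[k][0];
    int c = col + off[k][1];
    if (r >= 0 && r < RES && c >= 0 && c < RES) {   // clip at the array edge
      out[n][0] = r;
      out[n][1] = c;
      n++;
    }
  }
  return n;
}
```

Because all six neighbors sit at the same distance, kernels such as smoothing or gradient estimates come out more isotropic than their square-grid counterparts, which is part of what makes hex sampling attractive.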

(Images: 8x8 hex pixel layout from the CAD tools; Stonyman chip wire bonded to a test board - pardon the dust!)

We plan to release the chips in the near future, with a datasheet, sample Arduino script, and (yes!) a schematic diagram of the chip innards. (If anyone *really* wants one now, I can make an arrangement…)

We are also working on a new-generation ArduEye sensor shield with these chips. The shield will be matched to an Arduino Mini for small size, and use a 120 MIPS ARM for intermediary processing. The design will be “open”, of course. (Note- anyone who purchased an original ArduEye will get a credit towards the purchase of the new version when it comes out.)

(The thrill of getting new chips back is much like that for circuit boards. You designed it, so in theory you know how it works. But you are never 100% sure and there is no datasheet for you to consult other than your own notes or CAD drawings. You are always slightly afraid of getting a puff of smoke when you first power it. No smoke… the circuit breaker didn’t trigger… so all is good. Then you probe it, verify that different portions work as expected, tweak various settings, and finally get it working. The experience is just like that for a PCB except the stakes are higher.)


Comments

  • @Helldesk- short answer: ideally yes, if it would interest people.

    "How soon" and "how much" depend on what is actually sold- a chip wirebonded to a breakout board would be pretty easy to put out there and inexpensive, but would require another processor. The ArduEyeII prototype we have in the works, with an ARM, would give more processing oomph and isolate the casual user from the heavy-duty image processing, but will take a bit more time. The good news is we have a breadboard version of the ArduEyeII working, using an older chip. We plan to have at-scale prototypes made in a few weeks.

    @Alex- that's great! I got started in school with Magic. (I actually designed chips from 2000 through 2004 exclusively using open source software and simulation tools, running on Linux!)

    @Randy- Very good question- I don't know yet. We still don't know how much it will cost to produce in quantity.

  • Developer

    Looking good. I think using an ARM is a great choice. Atmel's chips are great to work with because of the Arduino support, but for the horsepower you need for image processing, ARM is probably the way to go.

    Time permitting, I think we could add support for the sensor into ACM.

    Geoff, any idea how much the sensor will end up costing?

  • I am most definitely interested in one of these.
  • Very cool, that's the tool I also used in school :).

  • Very cool. Do you suppose we might see these babies in some component in the DIY Drones store, some day, in the (near) future?

  • Developer
    ARM is preferred - nice to have a subsystem for the processing and comms.
  • @Mark- on a current ArduEye board (no ARM) or later with the ARM?

    @Alex- I used Tanner Tools Pro for schematic, layout, LVS, and Spice

    @JCrubino- in-pixel binning is there to reduce resolution as well as to form virtual pixels whose shapes are other than square. For example, if you are placing this on a ground 'bot and the sensor is looking sideways, you may only care about information along the horizontal axis and can thus bin down using 8x1 rectangles. That gives you one eighth the amount of data to acquire and process, which can speed things up.

    You can also speed up other basic image processing algorithms using binning. For example, if you wanted to implement your own Wii-mote-like light tracking device, you could first bin using 4x4 blocks to downsample the 112x112 raw array to 28x28 (=784 virtual pixels), identify which virtual pixels contain bright lights, then turn binning off and re-grab just those regions of interest to locate the lights to higher precision (see the sketch below). We had things like this in mind when adding the binning (which really required just two additional transistors per pixel).
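
    Below is a rough C++ sketch of that two-pass idea. setBinning() and readWindow() are hypothetical stand-ins for whatever the eventual Stonyman library provides (here they just sample a made-up scene array so the snippet compiles), and the brightness threshold is arbitrary.

```cpp
const int RAW = 112;
static unsigned char fakeScene[RAW][RAW];   // placeholder for the real sensor

static int binM = 1, binN = 1;

// Hypothetical: configure MxN binning on the chip.
void setBinning(int m, int n) { binM = m; binN = n; }

// Hypothetical: read an h x w window of (virtual) pixels starting at (r0, c0).
void readWindow(int r0, int c0, int h, int w, unsigned char *buf) {
  for (int r = 0; r < h; r++)
    for (int c = 0; c < w; c++)
      buf[r * w + c] = fakeScene[(r0 + r) * binM][(c0 + c) * binN];  // crude stand-in
}

const int BIN = 4;                  // 4x4 binning: 112x112 -> 28x28 virtual pixels
const int COARSE = RAW / BIN;       // 28
const unsigned char THRESH = 200;   // "bright light" threshold (made up)

unsigned char coarse[COARSE * COARSE];
unsigned char window[BIN * BIN];

void trackLights() {
  // Pass 1: read the whole array binned down to 28x28 and flag bright virtual pixels.
  setBinning(BIN, BIN);
  readWindow(0, 0, COARSE, COARSE, coarse);

  // Pass 2: binning off; re-grab only the flagged 4x4 regions at full resolution.
  setBinning(1, 1);
  for (int r = 0; r < COARSE; r++)
    for (int c = 0; c < COARSE; c++)
      if (coarse[r * COARSE + c] > THRESH) {
        readWindow(r * BIN, c * BIN, BIN, BIN, window);
        // ...locate the light to full 112x112 precision within this window
      }
}
```

    Pass one reads only 784 values instead of 12544, and pass two grabs just a few 4x4 windows, so the total readout drops dramatically.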

  • "in-pixel binning", would this benefit low-light applications or just reduce data resolution?
  • Outstanding!
  • Very cool. What tool did you use for the schematic entry and layout?

    - Alex