Geoffrey L. Barrows's Posts (26)


Centeye Modular Vision Sensors

It has been a while since I’ve posted anything about recent Centeye hardware. Some of you may remember my past work implementing lightweight integrated vision sensors (both stereo and monocular) for “micro” and “nano” drones. These incorporated vision chips I personally designed (and had fabricated in Texas) and allowed combined optical flow, stereo depth, and proximity sensing in just 1.1 grams. Four of these on a Crazyflie were enough to provide omnidirectional obstacle avoidance and tunnel following in all ambient lighting conditions. Their main weakness was that they were time-consuming and expensive to make and adapt to new platforms. Some of you may also remember our ArduEye project, which used our Stonyman vision chip and was our first foray into open hardware. Although that project had a slow start, it did find use in a variety of applications ranging from robotics to eye tracking. I have discussed privately with many people rebooting the ArduEye project in some form.

Like many people, we faced disruption last year from COVID. We had a few slow months last summer, and I used the opportunity to create a new sensor configuration from scratch that has elements of both ArduEye and our integrated sensors. My hypothesis is that most drone makers would rather have a sensor that is modular and easy to reconfigure, adapt, or even redesign, and are OK if it weighs “a few grams” rather than just one gram. Some users even told me they would prefer a heavier version if it is more physically robust. Unlike the nano drones I personally develop, if your drone weighs several kilograms, an extra couple of grams is negligible. I am writing here to introduce this project, get feedback, and gauge interest in making this in higher quantities.

My goals for this “modular” class of sensors were as follows:

  • Use a design that is largely part agnostic, i.e. one that does not specifically require any single part (other than optics and our vision chip), in order to minimize supply chain disruptions. This may sound quaint now, but it was a big deal in 2020 when the first waves of COVID hit.
  • Use a design that is easy and inexpensive to prototype, as well as inexpensive to modify. We were influenced by the “lean startup” methodology. This includes making it easier for a user to modify the sensor and its source code.
  • Favor open source development platforms and environments. I decided on the powerful Teensy 4.0 as the processor, using the Arduino framework and PlatformIO as the development environment.

We actually got it working. At the top of this post is a picture of our stereo sensor board, with a 5cm baseline and a mass of 3.2 grams, and below is a monocular board suitable for optical flow sensing that weighs about 1.6 grams. We have also made a larger 10cm baseline version of the stereo board and have experimented with a variety of optics. All of these connect to a Teensy 4.0 via a 16-wire XSR cable. The Teensy 4.0 operates the vision chips, performs all image processing, and generates the output. We have delivered samples to collaborators (as part of a soft launch) who have indeed integrated them on drones and flown them. Based on their feedback we are designing the next iteration.


As with any new product, you have to decide what it does and what it does not do. Our goal was not extremely high resolution- that already exists, and the reality is that high resolution has other costs in terms of mass, power, and light sensitivity. Instead, we sought to optimize intensity dynamic range. The vision chips use a bio-inspired architecture in which each pixel adapts to its own light level independently of the other pixels. The result is a sensor that can work in all light levels (“daylight to darkness”, the latter with IR LED illumination), can adapt nearly instantaneously when moving between bright and dark areas, and can function even when both bright and dark areas are visible at the same time.

Below is an example of the stereo sensor viewing a window that is open or closed. (Click on the picture to see it at native resolution.) The current implementation divides the field of view into a 7x7 array of distance measurements (in meters), which are shown. Red numbers are measurements that have passed various confidence tests; cyan numbers are those that have not (and thus should not be used for critical decisions). Note that when the window is open, the sensor detects the longer range to objects inside even though the illumination levels are about 1% of those outside. A drone with this sensor integrated would be able to sense the open window and fly through it, without suffering a temporary black-out once inside.
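For anyone thinking about how such a grid might be consumed on the flight controller side, here is a rough sketch. The struct layout, field names, and the idea of steering toward the most open column are my own illustration, not the sensor's actual output format:

```cpp
// Hypothetical consumer of a 7x7 range grid with confidence flags.
const int GRID = 7;

struct RangeCell {
  float meters;    // estimated distance
  bool confident;  // passed the confidence tests (the "red" numbers)
};

// Return the column (0 = far left, 6 = far right) whose nearest confident
// range is the largest, i.e. the most open direction to steer toward.
int mostOpenColumn(const RangeCell cells[GRID][GRID]) {
  int bestCol = GRID / 2;       // default: straight ahead
  float bestClearance = 0.0f;
  for (int c = 0; c < GRID; c++) {
    float nearest = 1e6f;       // nearest confident obstacle in this column
    for (int r = 0; r < GRID; r++) {
      if (cells[r][c].confident && cells[r][c].meters < nearest) {
        nearest = cells[r][c].meters;
      }
    }
    if (nearest > bestClearance) {
      bestClearance = nearest;
      bestCol = c;
    }
  }
  return bestCol;
}
```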


A more extreme case of light dynamic range is shown in the picture below. This was taken with a different sensor that uses the same vision chip. On the top left is a picture of the sensor- note that it was in the sunlight, thus would be subject to the “glare” that disrupts most vision systems. On the top right is a photograph of the scene (taken with a regular DSLR) showing sample ranges to objects in meters. On the bottom is the world as seen by the sensor- note that the Sun is in the field of view at the top right, yet the objects in the scene were detected. Other examples can be found on Centeye’s website.


We are currently drafting plans for the next iteration of sensors. We will certainly include a 6-DOF IMU, which will be particularly useful for removing the effects of rotation from the optical flow. We are also envisioning an arrangement with the Teensy 4.0 placed nearly flush with the sensor for a more compact form factor. There is still discussion on how to balance weight (less is better) with physical robustness (thicker PCBs are better)! Finally, I am envisioning firmware examples for other applications, such as general robotics and environment monitoring. I am happy to discuss the above with anyone interested, privately or publicly.

Read more…

Figure 1: (left) Foam drone with optical flow sensor mounted under a wing, Summer 2001. (right) Foam drone with optical flow sensors for attempted obstacle avoidance, Summer 2002.

Today it is universally acknowledged that drones operating close to the ground need some sort of obstacle avoidance. What I am about to tell is the story of our own first attempt at putting obstacle avoidance on a drone, during the years 2001 through 2003, using neuromorphic artificial insect vision hardware. I enjoy telling this story, since the lessons learned run contrary to even current practice but, upon reflection, should be common sense. I learned an important lesson that is still relevant today: The best results are obtained with a systemic approach in which the drone and any obstacle avoidance hardware are holistically co-designed. This contrasts with the popular approach in which obstacle avoidance is a modular component that can be simply added to a drone.

Initial Successes with Altitude Hold

As discussed in my last post, by taking inspiration from the vision systems of insects, we implemented interesting vision-based behaviors on drones using only several hundred pixels. Nineteen years ago, in 2001, we built an optical flow sensor using a 24-pixel neuromorphic vision chip and a modest 8-bit microcontroller. We mounted the sensor on the bottom of one wing of a fixed-wing model airplane to view the ground (Figure 1 left, above). The sensor was programmed to measure front-to-back optical flow and from that estimate the airplane’s height above ground. We implemented a simple proportional control rule that would increase the aircraft’s throttle or elevator pitch as optical flow increased. The system worked quite well- once the aircraft was set to the target height, it would fly for as long as the battery lasted (and wind conditions permitted). The sensor only operated the throttle or elevator- the human operator would still steer the aircraft with the rudder via a standard RC control stick. We verified the ability of the sensor to ascend or descend gentle slopes. We even later achieved similar results over unbroken snow on a cloudy day!
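For readers curious what such a rule looks like in code, here is a minimal sketch of the proportional law described above, throttle only. The gains, units, and names are illustrative placeholders, not the original firmware:

```cpp
// Higher-than-target flow means the aircraft is lower (or faster) than
// desired, so add throttle; lower-than-target flow means back off.
float targetFlow     = 1.0f;   // flow measured at the desired height (rad/s)
float Kp             = 0.5f;   // proportional gain, tuned in flight
float cruiseThrottle = 0.6f;   // nominal throttle setting, 0..1

float throttleCommand(float measuredFlow) {
  float cmd = cruiseThrottle + Kp * (measuredFlow - targetFlow);
  if (cmd < 0.0f) cmd = 0.0f;  // clamp to the valid throttle range
  if (cmd > 1.0f) cmd = 1.0f;
  return cmd;
}
```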

You can see one of the videos here, which I shared previously: https://youtu.be/EkFh_2UX-Jw You’ll hear a tone in the video. Our telemetry was laughably simple- The sensor transmitted this tone by on-off keying an RF module at a rate that varied with optical flow. The camcorder operator carried a scanner (set to AM mode) that received and audibly outputted that tone, allowing it to be recorded with the video of the flight.

I look back fondly at the scrappiness of our approach! I personally designed the vision chip using free and open source tools (Magic and SPICE) on a $1000 laptop running Linux. I then had the chip fabricated by the MOSIS service for $1200. Our most expensive piece of lab equipment was a PIC microcontroller emulator that I think cost me about $2500 at the time. I find it interesting that a “fabless semiconductor” startup today “needs” tens of millions of dollars to start, but that is a topic for another post. Going from the design of the vision chip (December 2000) to the first successful flight (July 2001) took eight months, much of it spent learning how to build and control the model airplane!

Failed Attempts at Obstacle Avoidance

After getting “altitude hold” under control, I decided to next tackle obstacle avoidance, again using optical flow. I drew inspiration from the work of noted biologist (and MacArthur grant winner) Michael Dickinson- His laboratory had recently recorded the 3D flight paths made by fruit flies in a chamber and analyzed how the flight path turned in response to an obstacle, in this case the wall of the chamber (Figure 2). Not surprisingly, the flies tended to travel in straight lines until they were sufficiently close to the wall, at which point they would execute a sharp turn away. An analysis of the flight paths and the environment showed that the turn reliably happened whenever the optical flow due to the approaching wall crossed a threshold. The fly would then resume forward flight in the new direction until another part of the wall was reached.

Figure 2: 3D flight paths made by fruit flies in a confined arena. Apologies for the poor image quality.

Back in 2001, this was a behavior that I thought could be implemented with a few optical flow sensors. My first attempt had just two optical flow sensors mounted on the aircraft to view diagonally forward-left and forward-right. The controller was programmed to detect if the optical flow was high on one side, at which point it would turn the aircraft’s rudder to steer away for a fixed duration. Here is a video clip of an early test: https://youtu.be/Ao3rhiQR0BM In this case the tone indicated the rudder actuation- a middle pitch indicated neutral, i.e. no actuation, while a high or low tone indicated an attempt by the rudder to turn one direction or the other. It was comical- the optical flow sensor picked up the tree and initiated a turn, but not soon enough to avoid a collision!

We spent almost two years trying to improve that demonstration. I designed improved neuromorphic vision chips (with 88 pixels), then we made lighter yet more powerful versions of the optical flow sensors. We even experimented with different optical flow algorithms by varying the firmware on the microcontroller while still using the same vision chip as a front-end. Figure 1, above right, shows one iteration. It had four sensors- one downward to control height and three forward to detect obstacles. We added a proper telemetry downlink that allowed us to record and display data such as optical flow measurements, aircraft yaw rate, and control responses. We still did not achieve improved obstacle avoidance. We did, however, produce a graphical display showing three beautifully hilarious sequential events: First the increased optical flow due to the looming tree, then the control response by the rudder, and finally a sharp spike in the yaw rate as the aircraft slammed into the tree! The only thing that was “improved” was the monetary value of the electronics we had to fish out of a tree after each crash…

Eureka!

It is human nature, when trying to solve a difficult problem, to go down the same general path and bang your head against a dead end until you accept that you are indeed at a dead end and need a different approach. In my case, the epiphany came when I took a second look at the flight paths recorded by Prof. Dickinson. I saw something that I had missed before. Sure, the flies flew a straight line and then made sharp turns upon detecting an obstacle. What I had missed, however, was the sharpness of the turn- the arc flown during the turn had a radius of a few centimeters. More significantly, the turn was made about ten centimeters away from the wall. I imagined a parameter “R”, which might be the “turning radius” of the insect (Figure 3). A “turning radius” is not a native measure of the maneuverability of a flying insect or drone in the same way that it is for a ground vehicle. However, it does serve as a first-order approximation. I then imagined a parameter “D”, which might be the size of an obstacle to avoid or the distance from an obstacle at which the drone or insect would turn to avoid it.

Figure 3: D/R ratio

I then realized that what is critical is the ratio of the two: D/R. This ratio is basically a normalization of distance relative to maneuverability. In the fruit fly case, D/R was perhaps 10 cm divided by a few centimeters, or a value of around two to four. Now consider the aircraft we had been using- it was a foam “park flyer” type designed to be easy to fly and control, using only a rudder for steering, and thus by design had limited maneuverability. Furthermore, we loaded it up with sensors, telemetry, and other support electronics, which further reduced its ability to turn. Its R value was perhaps 20 meters if not more! To achieve a similar D/R of two to four, it would need to start avoiding the tree at a distance of 40 to 80 meters or more. At that distance, the small tree we were trying to avoid was simply too small in the visual field for our sensors to detect. With that turning radius, our implementation was better suited to avoiding a large building, mountain, or cliff than a single tree.

I was eager to achieve a successful demonstration of obstacle avoidance. Our sponsor at the time (DARPA) made it clear they required it. When I had the insight of D/R, I realized what we needed to do: We needed to boost the D/R of our system, and the most direct way was to find ways to decrease R, i.e. make the aircraft much more maneuverable. First, we found a way to shave a few grams off the sensors’ mass while modestly improving their performance. We then simplified the control and support electronics- We took out the telemetry downlink to a human operator, built a simpler controller board, and found lighter cabling. Finally, we scratch-built a new model aircraft from balsa wood, pine, and foam. The resulting aircraft (Figure 4) was smaller, at half the wingspan, and at least an order of magnitude lighter. It was ugly, inefficient, and made any aeronautical engineer who looked at it cringe. But it had what I, a chip designer by training but perhaps armed with a fundamental grasp of physics, realized would make the aircraft turn- a giant rudder!

Figure 4: Aircraft with improved D/R used to demonstrate obstacle avoidance

After honing the aircraft design, it took just a few weeks of tuning to get it to avoid a tree line. The result is this video that I shared previously: https://youtu.be/qxrM8KQlv-0

Lessons Learned

So, was it possible almost two decades ago to provide a drone with obstacle avoidance using the technology available then? Absolutely. In fact, we probably could have done this in the 1990s or even in the 1980s if we had known then what we know now. But it would not have been accomplished merely by adding obstacle avoidance to the drone. It would have also required designing the drone to support obstacle avoidance.

There are lessons to be learned from that demo that I still carry with me. First, and most important, is that it is necessary to take a systemic approach when implementing obstacle avoidance on a drone. Such a systemic approach will yield insights that would be missed if you look only at individual components. In my case from 17 years ago, taking a systemic approach gave me the insight that successful obstacle avoidance required increasing D/R, and the most direct route at the time was to decrease R rather than increase D, in other words, to make the platform more maneuverable rather than redesign the sensor.

A second lesson learned is that very often “less is more”. There is a benefit to simplicity and to eliminating excess that is often lost in practice. I am reminded of a saying attributed to the great automotive engineer Colin Chapman of Lotus- more power makes a race car faster on the straights, while more lightness makes it faster everywhere! I think this is an easy lesson to understand, but a tough one to incorporate because it runs contrary to habits we have developed in society- Pedagogy and society both reward people for working excessively hard and going through the motions to implement a complex solution rather than taking the time to identify and implement one that is simpler and more elegant.

I will discuss, in another post, observations and lessons learned from more recent work providing small drones with obstacle avoidance, including whether a modular approach is even feasible at the current time. For now, I am curious to learn whether others have had similar experiences. Thank you for reading!

Read more…


(Photo by Dustin Iskandar, CC BY 2.0)

My long-time interest has been in developing insect-type vision for small drones. Properly implemented, I believe that such vision hardware and algorithms will allow small drones to fly safely amidst clutter and obstacles. Of course, I have been following with great interest other approaches to providing autonomy to small drones. Like almost everyone else, I am in awe of the achievement of Skydio- nice work!

My own work, though, has been to implement similar types of autonomy on much smaller “nano” scale platforms. These are tiny sparrow-sized drones that can fit in your hand and weigh just tens of grams. This is well under the 250-gram threshold the FAA uses to determine if a drone needs to be registered, and certainly much smaller than most commercially available drones. Nano drones have some fantastic advantages- they are small, stealthy, easy to carry, and (in my opinion) inherently safe. They also fit into much tinier spaces than larger drones and can thus get closer to objects that might be inspected.

That small size, however, does not lend itself well to carrying lots of on-board processing, say from a GPU single-board computer. One of my drones in its entirety (vision and all) weighs less than available GPU boards (especially once you add the required heat sink!). Until single-digit gram GPU modules (inclusive of everything but the battery) are available, I am stuck with much more modest processing levels, say from advanced microcontrollers capable of hundreds of MIPS (million instructions per second) rather than the teraflops you can get from GPUs. Given that most contemporary approaches to vision-based autonomy use VGA- or similar-resolution cameras to acquire imagery and GPUs to process this imagery, you might think implementing vision on a nano drone is not feasible.

Well, it turns out implementing vision on nano drones is quite doable, even without a GPU. In past work, I’ve found you can do quite a lot with just a few hundred to a few thousand pixels. I’ll get into examples below, but first let’s consider what types of solutions flying insects have evolved. If you begin a study on insect vision systems, you will notice two things right away- First, insect vision systems tend to be omnidirectional. Their only blind spots tend to be directly behind them, blocked by the insect’s thorax. Second, insect vision systems have what we would think of as a very low resolution. An agile dragonfly may have about 30,000 photoreceptors (nature’s equivalent of pixels), almost three orders of magnitude less than the camera in your smart phone. And the lowly fruit fly? Only about 800 photoreceptors!

How do insects see the world with such low resolution? Much like contemporary drones, they make use of optical flow. Unlike contemporary drones, which generally use a single camera mounted on the bottom of the drone (to measure lateral drift or motion), insect vision systems measure optical flow in all directions. If the optical flow sensing of a contemporary drone is analogous to one optical mouse looking down, an insect vision system is analogous to hundreds or thousands of optical mice aimed different directions to cover every part of the visual field.

The vision systems of flying insects also include neurons that are tuned to respond to global optical flow patterns that, it is believed, contribute to stability and obstacle avoidance. Imagine yourself an insect ascending in height- the optical flow all around you will be downward as all the objects around you appear to descend relative to you. You can imagine similar global optical flow patterns as you move forward, and yet others as you turn in place, and yet other expanding patterns if you are on a collision course with a wall.

Another trick performed by insects is that when they fly, they make purposeful flight trajectories that cause optical flow patterns to appear in a predictable manner when obstacles are present. Next time you are outside, pay attention to a wasp or bee flying near flowers or its nest- you will see they tend to zig-zag left and right, as if they were clumsy or drunk. This is not clumsiness- when they move sideways, any object in front of them produces clear, distinct optical flow patterns that reveal not only the presence of what is there but also its shape. They do this without relying on stereo vision. Essentially you can say flying insects use time to make up for their lack of spatial resolution.

Such purposeful flight trajectories can be combined with optical flow perception to implement “stratagems” or holistic strategies that allow the insect to achieve a behavior. For example, an insect can fly down the center of a tunnel by keeping the left and right optical flow constant. An insect can avoid an obstacle by steering away from regions with high optical flow. Biologists have identified a number of different flight control “stratagems”, essentially holistic combinations of flight paths, resulting optical flow patterns, and responses thereto that let the insect perform some sort of safe flight maneuver.
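To make the idea concrete, here is a toy sketch of two such stratagems: centering in a tunnel by balancing left and right flow, and turning away from whichever side's flow crosses a threshold. The gains, thresholds, and sign conventions are placeholders for illustration, not anyone's actual flight code:

```cpp
float Kcenter     = 0.8f;   // gain for the centering response
float avoidThresh = 2.0f;   // flow magnitude that triggers an avoidance turn

// Positive output = steer right, negative = steer left (arbitrary convention).
float steerCommand(float leftFlow, float rightFlow) {
  // Obstacle avoidance: one side's flow is high, so turn hard away from it.
  if (leftFlow > avoidThresh || rightFlow > avoidThresh) {
    return (leftFlow > rightFlow) ? 1.0f : -1.0f;
  }
  // Tunnel centering: drift toward the side with less flow, i.e. the side
  // that is farther away, until the two flows balance.
  return Kcenter * (leftFlow - rightFlow);
}
```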

In my work over the past two decades, I have been implementing such artificial insect vision stratagems, including flying them on actual drones. At Centeye we took an integrated approach to this (the subject of a future article)- We designed camera chips (“vision chips”) with insect-inspired pixel layouts and analog processing circuitry, matching lenses, and small circuit boards with processors that operate these vision chips. We then wrote matching optical flow and control algorithms and tested them in flight. Nobody at Centeye is just a hardware engineer or just a software engineer- the same group of minds designed all the above, allowing for a holistic and well-integrated implementation. The result was robust laboratory-grade demonstrations of different vision-based control behaviors at known pixel resolutions and processor throughputs. See the list below for specific examples. This list focuses on early implementations, most from a decade or more ago, to emphasize what can be done with limited resources. We also include one more recent example using stereo vision, to show that even stereo depth perception can be performed with limited resolution.

Altitude hold (2001), 16-88 pixels, 1-4 MIPS: Have a fixed-wing drone hold its altitude above ground by measuring the optical flow in the downward direction. Video links: https://youtu.be/IYlCDDtSkG8 and https://youtu.be/X6n7VeU-m_o

Avoid large obstacles (2003), 264 pixels, 30 MIPS total: Have a fixed-wing drone avoid obstacles by turning away from directions with high optical flow. Video link: https://youtu.be/qxrM8KQlv-0

Avoid a large cable (2010), 128 pixels, 60 MIPS: Have a rotary-wing drone avoid a horizontal cable by traveling forward in an up-down serpentine path.

Yaw control (2009), 8 pixels, 20 MIPS (overkill): Visually stabilize yaw angle of a coaxial helicopter without a gyro. Video link: https://youtu.be/AoKQmF13Cb8

Hover in place (2009), 250-1024 pixels, 20-32 MIPS: Have a rotary-wing drone hover in place and move in response to changing set points using omnidirectional optical flow. Video links: https://youtu.be/pwjUcFQ9b3A and https://youtu.be/tvtFc49mzgY

Hover and obstacle avoidance (2015), 6400 pixels, 180 MIPS: Have a nano quadrotor hover in place, move in response to changing set points, and avoid obstacles. Video link: https://youtu.be/xXEyCZkunoc

What we see is that a wide variety of control stratagems can be implemented with just a few hundred pixels and with processor throughputs orders of magnitude less than a contemporary GPU. Even omnidirectional stereo vision was implemented with just thousands of pixels. Most notable was yaw control, which was performed with just eight pixels! The above are not academic simulations- they are solid existence proofs that these behaviors can be implemented at the stated resolutions. The listed demonstrations did not achieve the reliability of flying insects, but then again Nature, by evolving populations of insects in the quadrillions over 100 million years, can do more than a tiny group of engineers over a few years.

There are several implications of all this. First, it would seem that the race for more pixel resolution that the image sensor industry seems to be pursuing is not a panacea. Sure- more pixels may yield a higher spatial resolution, bringing out details missed by coarser images, but at the cost of producing much more raw data than what is needed, and certainly much more than what is easily processed if you don’t have a GPU at your disposal!

Second, this raises the question- are GPUs really the cure-all for every situation in which processing throughput is a bottleneck? Don’t get me wrong- I love the idea of having a few teraops to play with. But is this always necessary? For many applications, even those involving vision-based control of a drone, perhaps we are better off grabbing fewer pixels and using a simpler but better tuned algorithm.

Personally, I am a big fan of the so-called 80/20 principle, which here implies that 80% of the information we can use comes from just 20% of the pixel data. The 80/20 principle is recursive- the top 4% may provide 64% of the value, and taken to an extreme, the top fraction of a percent of the pixels or other computational elements of a vision system will still provide information within an order of magnitude of that of the original set. It might seem like information is thrown away, until you realize that it is much easier to process a thousand pixels than a million. I wonder what other implications this has for machine vision and for artificial intelligence in general…

Third, this is very good news for nano drones, or even future insect-sized “pico” drones- if we just need a few thousand pixels and a few hundred MIPS of processing throughput, current semiconductor processes will allow us to make a vision system within a small fraction of a gram that supports this. Of course, we need the RIGHT thousand pixels and the RIGHT algorithms!

Thank you for indulging me in this. Please let me know what you think.

Read more…

(More info and full post here)

I've been experimenting with putting 360 degree vision, including stereo vision, onto a Crazyflie nano quadrotor to assist with flight in near-Earth and indoor environments. Four stereo boards, each holding two image sensor chips and lenses, together see in all directions except up and down. We developed the image sensor chips and lenses in-house for this work, since nothing available elsewhere is suitable for platforms of this size. The control processor (on the square PCB in the middle) uses optical flow for position control and stereo vision for obstacle avoidance. The system uses a "supervised autonomy" control scheme in which the operator gives high level commands via control sticks (e.g. "move this general direction") and the control system implements the maneuver while avoiding nearby obstacles. All sensing and processing is performed on board. The Crazyflie itself was unmodified other than a few lines of code in its firmware to get the target Euler angles and throttle from the vision system.

Below is a video from a few flights in an indoor space. This is best viewed on a laptop or desktop computer to see the annotations in the video. The performance is not perfect, but much better than the pure "hover in place" systems I had flown in the past since obstacles are now avoided. I would not have been able to fly in the last room without the vision system to assist me! There are still obvious shortcomings- for example the stereo vision currently does not respond to blank walls- but we'll address this soon...

Read more…

I've been working on adding visual stabilization to a Crazyflie nano quadrotor. I had two goals- First, to achieve the same type of hover that we demonstrated several years ago on an eFlite mCX. Second, to do so in extremely low light levels, including in the dark, borrowing inspiration from biology. We are finally getting some decent flights.

Above is a picture of our sensor module on a Crazyflie. The Crazyflie is really quite small- the four motors form a square about 6cm on a side. The folks at Bitcraze did a fantastic job assembling a virtual machine environment that makes it easy to modify and update the Crazyflie's firmware. Our sensor module comprises four camera boards (using an experimental low-light chip) connected to a main board with an STM32F4 ARM processor. These cameras basically grab optical flow type information from the horizontal plane and then estimate motion based on global optical flow patterns. These global optical flow patterns are actually inspired by similar ones identified in fly visual systems. The result is a system that allows a pilot to maneuver the Crazyflie using control sticks, and then will hover in one location when the control sticks are released.

Below is a video showing three flights. The first flight is indoors, with lights on. The second is indoors, with lights off but with some leaking light. The third is in the dark, but with IR LEDs mounted on the Crazyflie to work in the dark.

There is still some drift, especially in the darker environments. I've identified a noise issue on the sensor module PCB, and already have a new PCB in fab that should clean things up.

Read more…

(Image of Megalopta genalis by Michael Pfaff, linked from Nautilus article)

How would you like your drone to use vision to hover, see obstacles, and otherwise navigate, but do so at night in the presence of very little light? Research on nocturnal insects will (in my opinion) give us ideas on how to make this possible.

A recent article in Nautilus describes the research being performed by Lund University Professor Eric Warrant on Megalopta genalis, a bee that lives in the Central American rainforest and does its foraging after sunset and before sunrise, when light levels are low enough to keep most other insects grounded but just barely adequate for Megalopta to perform all requisite bee navigation tasks. This includes hovering, avoiding collisions with obstacles, visually recognizing its nest, and navigating out and back to its nest by recognizing illumination openings in the branches above. Deep in the rainforest the light levels are much lower than out in the open- Megalopta seems able to perform these tasks when the light levels are as low as two or three photons per ommatidium (compound eye element) per second!

Professor Warrant and his group theorize that the Megalopta's vision system uses "pooling" neurons that combine the acquired photons from groups of ommatidia to obtain the benefit of higher photon rates, a trick similar to how some camera systems extend their ability to operate in low light levels. In fact, I believe even the PX4flow does this to some extent when indoors. The "math" behind this trick is sound, but what is missing is hard neurophysiological evidence of this in Megalopta, which Prof. Warrant and his colleagues are trying to obtain. As the article suggests, this work is sponsored in part by the US Air Force.

You have to consider the sheer difference between the environment of Megalopta and the daytime environments in which we normally fly. On a sunny day, the PX4flow sensor probably acquires around 1 trillion photons per second. Indoors, that probably drops to about 10 billion photons per second. Now Megalopta has just under 10,000 ommatidia, so at 2 to 3 photons per ommatidium per second it experiences around 30,000 photons per second. That is a difference of up to seven orders of magnitude, which is even more dramatic when you consider that Megalopta's 30k photons are acquired omnidirectionally, and not just over a narrow field of view looking down.

Read more…

Hardware Startup Groups Unite!

Recently I started the DC Hardware Startup meetup as a complement to the many Web 2.0-ish entrepreneur meetups in the DC area and elsewhere, and then reached out to the organizers of similar meetups around North America. Two are about a year old- the SF Hardware Startup meetup and Solid State Startups, both near Silicon Valley. Two others, one based in New York City and one based in Toronto, are also at most about a month old. The common themes are as you'd expect- rapid prototyping / 3D printing, lean methods/manufacturing, open source HW, etc., and of course entrepreneurship.


I've been in contact with most of the above groups, and we agree there is something larger than any one regional meetup group. We would like to find a way to bring these communities together, and I'd be happy to take your input there. If any new similar groups form, please do let us know. In the meantime, Nick Pinkston of the SF group put together a few initial resources as a seedling:

Reddit: http://www.reddit.com/r/hwstartups

Twitter ('bot feed from Reddit): @HWStartups

Hardware Overflow prototype site: http://area51.stackexchange.com/proposals/42563

 

Below is the opening description of the DC Hardware Startup group:

-----

We are DC-area technology entrepreneurs and innovators who want to build viable and vibrant businesses based on hardware, whether electronic, mechanical, or other.

Atoms are the new bits: We believe that the time is ripe for a new wave of entrepreneurship in hardware in the United States and worldwide.

Lean startup / pretotype methodologies: Thanks to inexpensive contract manufacturing, domestic and overseas, 3D printing, programmable hardware, and an Internet to propagate ideas, the principles of "lean startup" methodologies are as applicable to hardware startups as to software startups. Hardware "minimum viable products" (MVPs) can be built in weeks or days, and "pretotypes" can often be formulated in just hours or even minutes.

Identify the right product to build before you mass produce: This classic rule still holds. How can we identify the right hardware product to build without expending too many resources?

Unique challenges for hardware: Hardware startups have their own unique logistic challenges- inventory management, component sourcing, and manufacturing to name a few. We will discuss and learn how to handle these issues as well.

Inventor to entrepreneur: We hardware folks love to tinker and invent. This is pure fun! Can we apply this tinkering mindset to build businesses, not just gizmos?

If your product moves, flies, makes a sound, has blinking lights, or does something physical, come join us!

 

 

 

Read more…

I was pleased to see a cool laser rangefinding project on Kickstarter- I hope this project gets fully funded (and I'm a backer). I've actually been experimenting myself with structured light and laser rangefinding using our ArduEye hardware and thought I'd share it here.


The setup is very simple- An Arduino Pro Mini serves as the computing backbone of the device. Via a 2N2222 transistor (I know, I know...) the Arduino can switch a red laser module on and off. The Arduino is connected to an ArduEye breakout board with one of Centeye's Stonyman image sensor chips and a cell-phone camera lens. The whole setup (excluding the red FTDI thing) weighs about 10.9 grams. I think we can reduce that to maybe 4 or 5 grams- the laser module weighs 1.9 grams and is the limiting factor.

The principle of operation is straightforward- the laser is mounted a known baseline distance to the side of the image sensor. The Arduino first turns off the laser and grabs a small image (3 rows of 32 pixels in this implementation). Then the Arduino turns the laser on and grabs the same pixels. The Arduino then determines which pixel experienced the greatest increase in light level due to the laser- that "winning point" is the detected location of the laser in the image. Using this location, the baseline distance, the lens focal length, the pitch between pixels on the image sensor, and basic trigonometry, we can then estimate the distance to the target. I haven't yet implemented this final distance calculation- my main interest was seeing if the laser could be detected. The above video shows the system in operation.
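Since that final distance calculation is not implemented yet, here is a sketch of what the triangulation step could look like. All the constants below are placeholder values; substitute the baseline, focal length, and pixel pitch of your own build:

```cpp
const float BASELINE_M    = 0.05f;     // laser-to-lens separation, meters
const float FOCAL_LEN_M   = 0.002f;    // lens focal length, meters
const float PIXEL_PITCH_M = 0.00002f;  // pixel-to-pixel spacing, meters

// laserCol: column of the "winning" pixel (largest laser-induced increase)
// refCol:   column where the spot would land for a target at infinity
float rangeFromLaserPixel(int laserCol, int refCol) {
  float offsetPx = (float)(laserCol - refCol);
  if (offsetPx <= 0.0f) return -1.0f;  // no valid triangulation
  float disparityM = offsetPx * PIXEL_PITCH_M;    // offset on the focal plane
  return BASELINE_M * FOCAL_LEN_M / disparityM;   // similar triangles
}
```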

In practice, I've been able to pick up the laser point at a distance of up to about 40 feet- not bad for a 2 mW laser. In brighter environments you can add an optical bandpass filter that lets through only the laser light- with this the system works at distances of, say, 10 feet even in 1 klux environments, e.g. a sunlit room. If you are using this for close ranges, you can turn up the pulse rate and grab distances at up to 200Hz. How does an Arduino grab and process images at 200Hz? Easy- at 3x32 it is only grabbing 96 pixels!

Read more…

A while ago we (Centeye) started ArduEye, a project to implement an open source programmable vision sensor built around the Arduino platform. The first ArduEye version used a simple Tam image sensor chip and a plastic lens attached directly to the chip. After much experimentation and some feedback from users, we now have a second generation ArduEye.

The second generation ArduEye is meant to be extremely flexible, ultimately allowing one to implement a wide variety of different sensor configurations. A basic, complete ArduEye is shown below, and contains the following basic components:


An Arduino- Currently we are supporting Arduino UNO-sized boards (e.g. UNO, Duemilanove, Pro) and the Arduino MEGA. When the ARM-driven DUE comes out, we will surely support that as well.

A shield board- this board plugs into the Arduino, and has a number of places to mount one or more image sensor breakout boards. This shield also has places to mount an optional external ADC as well as additional power supply capacitors if desired.

A Stonyman image sensor on a breakout board- The Stonyman is a Centeye-designed 112x112 resolution image sensor chip with an extremely simple interface: 5 digital lines in, which are pulsed in predefined sequences, and one analog line out, which carries the pixel values. The Stonyman chips are wirebonded directly to a 1-inch square breakout board, which can plug into the shield.

Optics- Possibilities include printed pinholes, printed slits, and cell-phone camera lenses, depending on what you want to do.

Example application- The "application" is an Arduino sketch programmed into the Arduino. This sketch determines what the ArduEye does. One sketch can make it track bright lights, another sketch can measure optical flow, and so on. We are releasing, initially, a base sketch that demonstrates light tracking, optical flow, and odometry. Let us know what other example applications you would like to see.

ArduEye libraries- These libraries are to be installed in your Arduino IDE's "libraries" folder, and include functions to operate the Stonyman image sensor chip as well as acquire and process images, including measuring optical flow.

GUI- Finally, we created a basic GUI that serves as a visual dump terminal for the ArduEye. You can now communicate with the ArduEye via either the GUI or the basic Arduino IDE's serial terminal. The GUI was written in Processing.

We designed the system to allow easy hacking to implement a wide variety of vision sensors by exploring combinations of optics, image sensing, and image processing. I personally find it useful, and actually use this system for prototyping things at Centeye- I can prototype a new vision sensor in just a couple hours. The target applications are quite broad and include just about anything that may use embedded vision, whether robotics, sensor nets, industrial controls, interactive electronic sculptures (yes this has come up), and so forth.

The video at the top shows some of the basic things you can do with this ArduEye. You'll see the ArduEye interfacing with a host PC using both the Arduino IDE's serial terminal and the ArduEye GUI. For more details, including links to the hardware design files and source code, go to the ArduEye wiki site. The site is a work in progress, but is adequate to get people started. The sample "first application" and GUI is what was used to generate the above video.

Right now we are having 200 Stonyman breakout boards assembled- they should be ready within a month. We'll make more if this is well-received. We can assemble a few in-house at Centeye- I'll do this if enough people twist my arm and promise to really play with the hardware. :)

Please let me know your thoughts. In particular, are there any other "sample application" sketches you'd like us to implement?

 

 

Read more…

I've been experimenting with using an Arduino-powered vision system to detect and locate point light sources in an environment. The hardware setup is an Arduino Duemilanove, a Centeye Stonyman image sensor chip, and a printed pinhole. The Arduino acquires a 16x16 window of pixels centered underneath the pinhole, which covers a good part of the hemisphere field of view in front of the sensor. (This setup is part of a new ArduEye system that will be released soon...)

The algorithm determines that a pixel is a point light source if the following four conditions are met: First, the pixel must be brighter than its eight neighbors. Second, the pixel's intensity must be greater than an "intensity threshold". Third, the pixel must be brighter, by a "convexity threshold", than the average of its upper and lower neighbors. Fourth, the pixel must similarly be brighter, by the same threshold, than the average of its left and right neighbors. The algorithm detects up to ten points of light. The Arduino sketch then dumps the detected light locations to the Arduino serial monitor.
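Here is a rough rendering of those four tests for one interior pixel (r, c) of the 16x16 image. The thresholds and the image type are placeholders, not the exact values used in the sketch:

```cpp
const int W = 16;
int intensityThresh = 120;  // "intensity threshold" (8-bit scale assumed)
int convexityThresh = 10;   // "convexity threshold"

bool isPointLight(const unsigned char img[W][W], int r, int c) {
  int p = img[r][c];
  // 1) brighter than all eight neighbors
  for (int dr = -1; dr <= 1; dr++)
    for (int dc = -1; dc <= 1; dc++)
      if ((dr != 0 || dc != 0) && img[r + dr][c + dc] >= p) return false;
  // 2) brighter than the intensity threshold
  if (p <= intensityThresh) return false;
  // 3) exceeds the average of the upper and lower neighbors by the
  //    convexity threshold
  if (p - (img[r - 1][c] + img[r + 1][c]) / 2 < convexityThresh) return false;
  // 4) same test against the left and right neighbors
  if (p - (img[r][c - 1] + img[r][c + 1]) / 2 < convexityThresh) return false;
  return true;
}
```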

A 16x16 resolution may not seem like much when spread out over a wide field of view. So to boost accuracy we use a well-known "hyperacuity" technique to refine the pixel position estimate to a precision of about a tenth of a pixel. The picture below shows the technique: If a point of light exists at a pixel, the algorithm constructs a curve using that pixel's intensity and the left and right intensities, interpolates using a second order Lagrange polynomial, and computes the maximum of that polynomial. This gives us "h", a subpixel refinement value that we then add to the pixel's whole-valued horizontal position. The algorithm then does something similar to refine the vertical position using the intensities above and below the pixel in question. (Those of you who have studied SIFT feature descriptors should recognize this technique.) The nice thing about this technique is that you can get the precision of a 140x140 image for "light tracking" without exceeding the Arduino's 2kB memory limit.
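The refinement step itself is only a few lines. Here is a sketch of the parabola peak calculation, taking the winning pixel and its two horizontal neighbors as inputs (variable names are mine):

```cpp
// Fit a parabola (second order Lagrange polynomial) through the three
// intensities and return the location of its peak relative to the center.
float subpixelOffset(float left, float center, float right) {
  float denom = left - 2.0f * center + right;  // curvature; negative at a peak
  if (denom >= 0.0f) return 0.0f;              // not a proper peak, skip
  return 0.5f * (left - right) / denom;        // "h", roughly -0.5 .. +0.5
}
// Refined column = winning column + subpixelOffset(left, center, right);
// the same formula on the pixels above and below refines the row.
```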


The algorithm takes about 30 milliseconds to acquire a 16x16 image and another 2 or 3 milliseconds to locate the lights.

The first video shows detection of a single point light source, both with and without hyperacuity position refinement. When I add a flashlight, a second point is detected. The second video shows detection of three lights (dining room pendant lamps) including when they are dimmed way down.

It would be interesting to hack such a sensor onto a quadrotor or another robotic platform- Bright lights could serve as markers, or even targets, for navigation. Perhaps each quadrotor could have an LED attached to it, and then the quadrotors could be programmed to fly in formation or (if you are brave) pursue each other.

With additional programming, that sensor could also implement optical flow computations much like I had done in a previous post.

SOURCE CODE AND PCB FILES:

The main Arduino sketch file can be found here: LightTracker_v1.zip

You will still need library files to run it. I've put these, as well as support documentation and Eagle files for the PCBs, in the downloads section of a Google Code project file, located here: http://code.google.com/p/ardueye-rocket-libraries/downloads/list

Read more…

 

I've been working on a new version of our ArduEye using one of our "Stonyman" image sensor chips and decided to see if I could grab four dimensions of optical flow (X shift, Y shift, curl, and divergence) from a wide field of view. I wirebonded a Stonyman chip to a 1" square breakout board and attached it to an Arduino Mega 2560 using a simple connecting shield board. I then glued a simple flat printed pinhole onto the chip using (yay!) 5-minute model airplane epoxy. With a little black paint around the edges, the result is a simple low resolution, very wide field of view camera that can be operated using the Arduino.


I programmed the Arduino to grab five 8x8 pixel regions- region 0 is forward while the other four regions are about 50 degrees diagonally off forward as shown. In each region the Arduino computed X and Y optical flow and odometry (essentially an accumulation of optical flow over time).

To compute X and Y shift, the algorithm summed respectively the X and Y odometry measurements from the five pixel regions. These are the first two dimensions of optical flow that most people are familiar with. To compute curl and divergence, the algorithm added the appropriate X or Y odometries from the corresponding pixel regions. For curl this results in a measurement of how the sensor rotates around its forward axis. For divergence this results in a measurement of motion parallel to the forward axis.
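Here is a rough sketch of that combination step. Which off-axis region sits where around the forward axis, and the sign conventions, are assumptions for illustration only; the real assignment follows the optics on the board:

```cpp
struct Flow2D { float x; float y; };   // accumulated flow in one 8x8 region

// f[0] = forward region; f[1..4] = right, up, left, down regions,
// each about 50 degrees off the forward axis (assumed ordering).
void combineFlows(const Flow2D f[5],
                  float &X, float &Y, float &curl, float &divg) {
  // X and Y shift: sum the corresponding component across all five regions.
  X = 0.0f; Y = 0.0f;
  for (int i = 0; i < 5; i++) { X += f[i].x; Y += f[i].y; }

  // Curl (rotation about the forward axis): the tangential flow components
  // around the ring of off-axis regions all add with the same sense.
  curl = f[1].y - f[2].x - f[3].y + f[4].x;

  // Divergence (motion along the forward axis): the off-axis regions see
  // flow directed radially away from (or toward) the forward direction.
  divg = f[1].x + f[2].y - f[3].x - f[4].y;
}
```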


In the current configuration the system operates at about 5 to 6 Hz, though when the serial dump is on that slows to about 2 Hz. Most of the delay is in the acquisition and involves wasteful array lookups to select which pixels to read out. Using an external ADC (which the middle board supports) and better code there is room for probably an order of magnitude speed increase.

The video shows a few test runs where I exposed the sensor to three of the four fundamental motions. Y shift was implemented using an air track (like some of you used in physics class). Curl motion was implemented with the aid of a well-loved turntable. Divergence was implemented by hand by moving the sensor to and from clutter. The corresponding plots show the response of all four motions, with the "correct" one emphasized.

You can see that the four components are largely independent. There is some crosstalk- curl and divergence tend to be the biggest recipients of crosstalk since they are effectively a difference between signals (and getting an accurate number by subtracting two noisy numbers is not easy). Factors such as varying distances around the camera can cause uneven stimulation of the different pixel fields, resulting in phantom curl and div. There is also a little bit of drift. There is a lot of room for optimizing the system for sure.

One immediate improvement would be to use two of these Stonyman cameras back-to-back so that near omnidirectional sensing could be performed. This would give us more information to separate the different components (X,Y,curl,div) as well as allow us to separate out the other two axes of rotation from X and Y.

A setup similar to this formed the basis for our recent single sensor yaw and heave (height) stability sensor demonstration.

What could something like this be used for? You could put it on a ground vehicle and do some odometry with it, either looking down or even looking up, though for looking up the odometry measurements would depend on the distance to other objects in the environment. You could also mount this on a quad looking down- X and Y would give your basic optical flow for sideways drift regulation. Curl gives you yaw rotation (though you already have that with a gyro). Divergence is most interesting- it would tell you about change in height.

You could also implement something similar with five of Randy's optical flow sensors aimed to look in the same five directions. (You could probably dispense with sensor 0 to save weight/cost in this case.)

Read more…

This device is no match for Randy's sensor, but it does (minimally) work. Think of this little project as a fun hack more than anything else. But with some tweaking and size reduction someone could probably implement an occasionally working altitude hold sensor for a fixed-wing RC aircraft.

This optical flow sensor uses CdS cells as light sensing elements. Recall that a CdS cell is basically a resistor whose value changes with illumination- more light results in less resistance. The fundamental sensing structure here is a pair of CdS cells connected in series to form a voltage divider. The middle node between the CdS cells forms the output. When both cells are equally illuminated, the output voltage is midway between Power and Ground (assuming the CdS cells are matched). If one cell is illuminated more than the other, the output voltage varies accordingly. An interesting quality of this CdS cell pair is that if you, say, double the amount of light striking both cells, the output changes very little.
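In code, the divider relation looks like this (values are placeholders)- the point being that the output depends on the ratio of the two photoresistances, so scaling both by the same lighting factor ideally leaves it unchanged:

```cpp
// Mid-node voltage of two CdS cells in series between Power and Ground.
float dividerOut(float vcc, float rTop, float rBottom) {
  return vcc * rBottom / (rTop + rBottom);
}
// dividerOut(5.0, 10e3, 10e3) -> 2.5 V; halving both resistances
// (roughly twice the light on both cells) still gives 2.5 V.
```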

Nine of these CdS cell pairs are laid out in a row, as shown in the video. Pay attention to the photo below to see how the CdS cells are placed and how they overlap within the array. The nine resulting outputs go to ports A0 through A8 (analog inputs 0 through 8) of an Arduino Mega. This project required a 'Mega because of the number of analog input signals.


For those of you with an image processing background, you can say that a CdS cell pair forms a simple analog edge detector, and that adjacent edge detectors are 120 degrees out of phase.

As light patterns travel across the CdS array, the nine analog signals will vary accordingly and can be interpreted by a basic one dimensional optical flow algorithm. For example, if a shadow moves left to right across the array, a pulse or step function will appear across ports A0 through A8 in sequence (or in the reverse order for motion the other way), which indicates visual motion.
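As one possible such algorithm (not the one in the attached sketch), a simple gradient-based estimate works on nine samples. The pin mapping and scaling below are placeholders, a real version would low-pass filter the readings, and the first frame's output is meaningless because prev[] starts at zero:

```cpp
#include <Arduino.h>

const int N = 9;
int   pins[N] = {A0, A1, A2, A3, A4, A5, A6, A7, A8};
float prev[N];  // readings from the previous frame

float oneDFlow() {
  float cur[N];
  for (int i = 0; i < N; i++) cur[i] = analogRead(pins[i]);

  // Least-squares fit of dI/dt = -v * dI/dx across the array.
  float num = 0.0f, den = 0.0f;
  for (int i = 1; i < N - 1; i++) {
    float Ix = 0.5f * (cur[i + 1] - cur[i - 1]);  // spatial derivative
    float It = cur[i] - prev[i];                  // temporal derivative
    num += It * Ix;
    den += Ix * Ix;
  }
  for (int i = 0; i < N; i++) prev[i] = cur[i];
  return (den > 0.0f) ? -num / den : 0.0f;        // CdS cells per frame
}
```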

To obtain an image, I just used a slit opening, which is a variation of a pinhole camera. This slit opening was oriented perpendicular to the CdS array, which preserves visual information parallel to the CdS array and smooths out information perpendicular to it. This helps make the array more sensitive to 1D visual motion in the desired direction. (For a rough metaphor, think of a bar code.)

I mounted all the electronics into a shoebox using masking tape. (For a more professional and durable version, use duct tape!) I also placed dark construction paper on the inside of the box to prevent light from bouncing around. I cut a slit opening in the box top as shown to be positioned over the CdS array.

The output can be read in two ways- The Arduino port D3 generates a PWM signal that, when connected to the RC network shown, produces an analog output representing the optical flow (5V = max positive, 0V = max negative, 2.5V = zero). Alternatively you can read it out using the Arduino environment's serial monitor.
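The PWM mapping itself is just a recentering of the flow value around half scale, so the RC-filtered output sits at 2.5V for zero flow. A sketch, with a placeholder full-scale value:

```cpp
#include <Arduino.h>

float maxFlow = 2.0f;  // flow value treated as full scale

void writeFlowPwm(float flow) {
  float clipped = constrain(flow, -maxFlow, maxFlow);
  int duty = (int)(127.5f + 127.5f * (clipped / maxFlow));  // 0..255
  analogWrite(3, duty);  // pin D3, smoothed by the external RC network
}
```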

The sensor is crude but does work. It needs a lot of light to function- it should work in a bright indoor environment but works better with natural outdoor lighting, say several hundred lux and up.

The Arduino sketch is attached here: CdS_OF_Sensor_r1.pde

Have fun!

Read more…

As part of Centeye's participation in the Harvard University Robobee project, we are trying to see just how small we can make a vision system that can control a small flying vehicle. For the Robobee project our weight budget will be on the order of 25 milligrams. The vision system for our previous helicopter hovering system weighed about 3 to 5 grams (two orders of magnitude more!) so we have a ways to go!

We recently showed that we can control the yaw and height (heave) of a helicopter using just a single sensor. This is an improvement over the eight-sensor version used previously. The above video gives an overview of the helicopter (a hacked eFlite Blade mCX2) and the vision system, along with two sample flights in my living room. Basically a human pilot (Travis Young in this video) is able to fly the helicopter around with standard control sticks (left stick = yaw and heave, right stick = swash plate servos) and, upon letting go of the sticks, the helicopter with the vision system holds yaw and heave. Note that there was no sensing in this helicopter other than vision- there was no IMU or gyro, and all sensing/image processing was performed on board the helicopter. (The laptop is for setup and diagnostics only.)

The picture below shows the vision sensor itself- the image sensor and the optics weigh about 0.2g total. Image processing was performed on another board with an Atmel AVR32 processor- that was overkill and an 8-bit device could have been used.


A bit more about optics: In 2009 we developed a technique for "printing" optics on a thin plastic sheet, using the same photoplot process used to make masks for, say, printed circuit boards. We can print thousands of optics on a standard letter size sheet of plastic for about $50. The simplest version is a simple pinhole, which can be cut out of the plastic and glued directly onto an image sensor chip- pretty much any clear adhesive should work. The picture below shows a close-up of a piece of printed optics next to an image sensor (the one below is a different sensor, the 125 milligram TinyTam we demonstrated last year).

The principle of the optics is quite understandable- a cross section is below. The plastic sheet has a higher index of refraction than air, so light from a near hemisphere field of view can be focused onto a confined region of the image sensor chip. You won't grab megapixel images in this manner, but it works well for the hundreds of pixels needed for hovering systems like this.


We are actually working on a new ArduEye system, using our newer Stonyman vision chips, to allow others to hack together sensors using this type of optics. A number of variations are possible, including using slits to sense 1D motion or pinhole arrays to make a compound eye sensor. If you want more details on this optics technique, you can visit this post, or you can pull up US patent application 12/710,073 on Google Patents. (Note: We are planning to give a blanket license of the patent for use in open hardware systems.)

(Sponsor Credit: "This work was partially supported by the National Science Foundation (award # CCF-0926148). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.")

Read more…

3689424002?profile=original

This is the third part of a three-part posting on chip design and how to reconcile it with the open source and DIY movements. (Part 1 is here and part 2 is here.) In this part I will discuss economics- How much does it actually cost to fabricate a chip? Can batching be used to make the cost accessible to a group of (hypothetical) casual chip designers?

 

Masks

Once a chip design is “taped out”, the chip foundry that manufactures it first creates a set of masks- anywhere from around 15 on up, depending on the design and the manufacturing process. These are for a photolithography process vaguely similar to the one used in PCB manufacture; however, for chips the masks are both more precise and more expensive.

I won’t give exact costs or list specific foundries, but I’ve seen masks cost anywhere from under $10k to over $50k for a complete set. This is for a 0.5um or 0.6um process. Obviously the masks for the latest 32nm process would cost much more.

 

Reticle

Typically in chip design the foundry creates a set of masks for a “reticle”, a box-like region that gets replicated over and over across a whole wafer using a stepping process. A wafer itself is a disc of silicon less than a millimeter thick but generally tens of centimeters in diameter.

The typical reticle size I use is about 21mm x 21mm. (The masks themselves are much larger than this, but the image is optically reduced during the manufacturing process.) You can fill up that reticle pretty much any way you want- you can put in a single 21mm x 21mm chip. If your chip size is just 2mm x 2mm, you can fit 100 of them into the reticle, so every reticle gets you 100 chips. You could also put in 100 different designs. This is where batching would come in.

 

Batching

There are companies that provide batching services. One of the oldest is the MOSIS service, run by the Information Sciences Institute (ISI) at the University of Southern California. MOSIS was originally set up with DARPA and NSF grants as a way to bring chip design to universities and give students the ability to design and fabricate a chip, either as a classroom exercise or for a research grant. MOSIS also offered its services to industry. To this day it still offers these services and actually serves as a “store front” for several major chip foundries (ON Semiconductor, TSMC, IBM, and others) for customers who want to prototype chips without paying for a whole set of masks.

The economics are essentially that everyone shares the cost of the tooling. Let’s say a mask set for a reticle has 10 different designs and cost $20k to make (a somewhat made up number)- that comes down to $2k per design- a much more reasonable number!

There are other services similar to MOSIS, and there are also individual companies that offer “multi-project runs” specifically for smaller customers that want to batch-prototype chips. So the batching concept is clearly established. In fact, whenever I do a run of silicon at Centeye I also place multiple designs on one reticle to get the most for my money.

 

Hypothetical Cost Breakdown

So let’s suppose a company wanted to get into the chip batching business. Let’s say the company decides to accept 2mm x 2mm chips and place 100 different designs onto a reticle. (The remaining 1mm slivers could be used for test circuits and quality control…) Looking at pure costs alone (i.e. neglecting things like “overhead”, “labor”, and “profit”), the numbers might look like this:

 

Mask set for a 21mm x 21mm reticle: $20,000

6”/150mm diameter wafers, set of 10 (approx 30+ reticles per wafer): $10,000

Dicing the wafer up into chips: $500 per wafer

 

First consider prototype quantities- I don’t know of any foundry that will make a single wafer; typically a set of wafers is manufactured in case one or two of them fail quality assurance inspections. For initial prototyping you would end up dicing just one wafer. Next comes packaging- most customers are not equipped to work with bare die, so they would probably want the chips in a DIP or similar package that they can then solder to a board or press into a breadboard; this would probably add at most $20 per chip, at cost. Total cost per customer: ($20k + $10k + $500)/100 = $305 for about 30 chips, plus $20 per packaged chip.

Next let’s consider a slightly higher quantity price break, by dicing up all 10 wafers. The total cost per customer rises to ($20k + $10k + 10 x $500)/100 = $350 for about 300 chips, not including packaging.
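For anyone who wants to play with these numbers, here is a trivial sketch that reproduces the arithmetic above. Every figure in it is one of the hypothetical costs from this post, not a quote from any foundry.

```cpp
// Reproduces the hypothetical cost breakdown above; all inputs are the made-up
// numbers from this post, not quotes from any foundry.
void setup() {
  Serial.begin(115200);

  const float maskSet  = 20000.0;   // mask set for a 21mm x 21mm reticle
  const float waferLot = 10000.0;   // set of ten 6"/150mm wafers
  const float dicing   = 500.0;     // dicing, per wafer
  const int   designs  = 100;       // 2mm x 2mm designs sharing the reticle
  const int   perWafer = 30;        // approx. reticles (hence chips per design) per wafer

  float proto = (maskSet + waferLot + dicing) / designs;        // dice 1 wafer:   $305
  float full  = (maskSet + waferLot + 10 * dicing) / designs;   // dice 10 wafers: $350

  Serial.print("prototype: $");  Serial.print(proto);
  Serial.print(" for ~");        Serial.print(perWafer);      Serial.println(" chips");
  Serial.print("all wafers: $"); Serial.print(full);
  Serial.print(" for ~");        Serial.print(10 * perWafer); Serial.println(" chips");
}

void loop() {}
```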

These numbers are encouraging. Of course, we have to assume 100 such customers can be found, and we have to consider the other costs to stay in business, but the above numbers should give you a starting point.

 

So where does that put us?

Once we factor in the cost of doing business, we get upwards of a thousand dollars for a batch-run prototype chip. This is a stiff amount compared to a batch-fabbed PCB. But it is not impossible- this amount is easily within the budget of a Kickstarter project, and there are hobbyists who would be willing to spend this amount on a chip fab. Certainly small companies could spend this amount. Also note that due to the nature of the chip manufacturing process, there could be several hundred individual chips available for use (or sale) if the design works. (There are a number of caveats, of course, which I didn’t mention here, but can discuss below if there is interest.)

I think one of the challenges, though, is overcoming the fear of spending money on a fabrication that doesn’t work. Getting back a PCB that doesn’t work is never fun; the stakes are higher for chips because of both the higher cost and the long lead times (generally six weeks or more). This is where the combination of good design tools and good design practices can help out- I think the “abstract layout” workaround mentioned in my last post, properly executed, could make “probability of success” sufficiently high for the DIY crowd.

 

Read more…

3689423258?profile=original

Above: Concept for an "abstract layout" workflow that could reconcile open source circuit designs with "closed" process design rules and libraries. First the designer would generate an abstract layout by instancing cells for various components. The above example shows a 2-input AND gate, with one input pulled to ground with a 10k resistor, and a tri-state buffer as an output. The designer would instance these cells, and then route connections between them. The designer would also place cells corresponding to "pads" at the periphery of the chip. The designer would not need to know the exact layout of the cell interiors- this may be kept "closed" even while the abstract layout itself is "open". This abstract layout may then be converted to a full detailed layout that the foundry can use to fabricate the chip.

Introduction

This is the second part of a three-part blog posting on chip design and how to reconcile it with the open source and DIY movements. Previously in Part 1, I gave a top level summary of “indie chip design” as I experience it. In this part I will discuss the real issues I would face if trying to “open” up a chip design. There are three areas to consider: the CAD design tools themselves, the design rules for a particular chip fab process, and design libraries.

First a few definitions: The term “foundry” refers to a company that performs the actual chip fabrication. The term “process” refers to a particular fabrication line of a foundry. A foundry may have several processes, including ones optimized for digital circuitry and others designed for analog. Typically a foundry will have several grades of processes, with the more expensive ones having more capabilities or a smaller feature size. The term “design tools” refers to the CAD tools that one may use to design a chip (analogous to Eagle for PCBs). The term “design rules” refers to specifications such as minimum width, spacing, and other requirements for the different layers.

Chip Design Tools: Open source versions do exist!

First, some good news- open source chip design tools do exist. One of the most prominent is Magic, which was created in the mid-1980s and has gone through multiple revisions as late as 2008. I personally used versions 6.4 and 6.5 for all of Centeye’s chip designs from 2001 through 2004, including designs with up to 400,000 transistors. Magic is somewhat derided by many chip designers because it automates the generation of some layers and restricts you to purely Manhattan geometries, i.e. no diagonal features. However, in practice these restrictions affect only a minority of chip designs and do not cost much in terms of layout space. Magic has a real-time design rule verifier- you will know pretty much right away if your layout has violated a design rule. I found this useful when first learning chip design. Magic can also create CIF and GDSII files, the chip design equivalent of the Gerbers used for PCBs.

Magic was originally designed to run on Unix and Linux operating systems, but it has more recently been ported to Windows (using Cygwin) and is maintained here at Open Circuit Design. Unfortunately it appears that recent development efforts have slowed, so I am not sure if Magic is as actively used as in the past.

Other open source layout tools exist as well, for example Electric. There is also a free, but not open source, tool called Lasi (pronounced “lazy”).

For circuit verification there are also free circuit simulators, most notably the venerable SPICE.

Commercial chip design tools exist as well (if you have the money...)

There are a number of commercial design tools available as well. I now use Tanner Tools, while others (with more resources) may use Cadence or Synopsys. The cost of Tanner Tools is within reach for a small company (on the order of the price of a new car- depreciation is my friend!) but is out of reach for most hobbyists. The latter tools can cost, as far as I know, upwards of a million dollars per seat, but allow wide-scale cooperation between large teams of designers.

As for why I switched from Magic to Tanner- around 2005 we were migrating to a new foundry and process and were anticipating designing more complicated chips, so I felt the need for something “professional”. Tanner Tools has worked very well for us. However- and this would be a good story for another post- we succumbed to “complexity creep”, producing complicated (and headache-inducing) designs, but later reversed course, so that now our designs are more minimalist again. I could go back to Magic, but since I now have Tanner and have built up my own libraries within it, it is easier to stay with Tanner. There are also other issues, which will be discussed next.

The bad news: Design rules, setup files, and design kits are generally not open.

As I see it, the biggest issue preventing open source from coming to chip design is not the design tools themselves, but the design rules! The design rules for a PCB are relatively easy to set up and, for programs like Eagle, are freely available. They are also easy to understand. For chips, however, the design rules are generally treated as proprietary by the foundry. This is because the design rules contain information that describes the capabilities of a process, and the foundries don’t want that information freely disclosed without restrictions. Yes- I had to sign NDAs to get the design rules for all of the processes we use now! Similarly, the “setup files” that configure a design tool for a particular process are also generally treated as proprietary by the company that sells the tools.

The same applies to the “design kits” themselves- these are basically library files that contain cells for different subcircuits you may want to use in a chip design. This includes cells such as gates, flip flops, or basic analog components. This also includes essential cells such as the “pads” that allow a chip to be interfaced with the outside world.

And this, folks, is why we cannot open source our chip designs- I would be violating all the NDAs I had signed with these companies! The most I could do (which I have already done in one case) is to release a schematic of the chip, in PDF form, that contains details on the circuits I had designed, and then “black boxes” for anything that came from a design kit.

Overall the trend seems to be for new advances to be made in the “closed” world, with open design tools and design kits being increasingly obscure and obsolete.

Is there a workaround?

I can see two possible workarounds. One has, in fact, been used in the past and (I believe) is still available to an extent. The other is an idea of my own. Both would require some level of cooperation from the chip foundry.

Workaround #1: Use simplified, genericized design rules

From 2000 through 2004, when I was using Magic, I also used the MOSIS service for chip fabrication. MOSIS is effectively a brokerage service that places different designs from different people onto the same wafer, thus allowing everyone to share the tooling costs. (I’ll talk more about that in Part 3 of these posts.) When using MOSIS, you had the option of using either the design rules from the foundry (requiring an NDA) or the more generic design rules that MOSIS set up and made freely available.

The MOSIS rules had the advantage of being a bit more transparent and easier to understand. Also, a design made for one process could be fabricated on any process that supports the same design rules. However, they had several weaknesses. First, designs made using these rules are not as compact as they would be using the native foundry rules; in practice this penalty is small, on the order of a few percent at most. Second, the MOSIS rules may not let you use all the features that a particular foundry or process supports. Third, the MOSIS rules are compatible with only a few processes- a look at the MOSIS website reveals that these rules are usable with two foundries. If you are willing to be constrained to just these two foundries (and both are good!), and if you don’t need some exotic unsupported feature, this approach will work for most designs.

The same concept used to make the MOSIS design rules can be applied to other foundries. The only barrier is obtaining the cooperation of the foundry, which may not want details of its process revealed openly, or simply may not want to spend resources supporting an additional set of design rules.

Workaround #2: Use an intermediate abstract layout

The second workaround would be to set up an abstract design layer that is more detailed than a “schematic” but less detailed than a “layout”. I envision something like what is depicted at the top of this post- a designer would create an abstract “layout” that instances cells such as gates, pads, or individual discrete components. Each of these cells would have a number of known “ports”, including power/ground lines, inputs, and outputs. For example, an “AND2” gate may have two power ports (ground and power), two digital inputs, and one digital output, while a capacitor or resistor may have just two ports. Some cells, of course, would be the pads that allow the chip to be connected to the rest of the world.

The designer would not need to know the specific layout of these cells or even their exact circuit diagram, but would need to know enough to use them in a design. The designer would instead have a datasheet of several pages (or more) describing everything needed to use that cell. For example, it might be known that the AND2 gate requires 5ns to settle and can source up to 10uA of current, or that a specific capacitor cell holds 1pF but has 0.1pF of parasitic capacitance at one node. The designer could then create an abstract layout by dropping these cells at various locations on the chip, much like a PCB designer places “parts” on a circuit board layout, and then draw the wires to connect the components together. Such a design tool could have autorouting features, and would then perform basic design rule checks.
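To make the idea concrete, here is a purely hypothetical sketch of what such an abstract layout might boil down to once captured in a file: a list of cell instances with rough placements, plus a list of nets connecting their ports. The cell, port, and net names below are invented for the AND2 example in the figure (the tri-state buffer's enable is omitted for brevity); no real tool or library defines them.

```cpp
// Purely hypothetical- invented names illustrating the abstract-layout idea.
struct Instance { const char* cell; const char* name; float x, y; };  // placement in mm
struct Net      { const char* name; const char* pins[4]; };           // connected ports

Instance instances[] = {
  {"PAD_IN",  "padA", 0.0, 0.0},
  {"PAD_IN",  "padB", 0.0, 0.2},
  {"RES_10K", "r1",   0.3, 0.2},   // pull-down on the second input
  {"AND2",    "u1",   0.5, 0.1},
  {"TRIBUF",  "u2",   0.8, 0.1},
  {"PAD_OUT", "padY", 1.1, 0.1},
};

Net nets[] = {
  {"a",       {"padA.out", "u1.in1"}},
  {"b",       {"padB.out", "u1.in2", "r1.p1"}},
  {"gnd",     {"r1.p2"}},          // tied to the chip's ground
  {"and_out", {"u1.out", "u2.in"}},
  {"y",       {"u2.out", "padY.in"}},
};

void setup() {}
void loop() {}
```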

Once the design is complete, this abstract level design file would be exported and sent to either the foundry or a “middle man”. The foundry or middle man would then be responsible for generating the final layout files from the abstract layout. Finally, the foundry could fabricate the chip.

The disadvantage of this approach is that the design is still only “partially open”. However at least the abstract layout could be made “open” and freely shared or released with appropriate open source licenses as desired.

There would be another benefit to this approach- The workflow would be more similar to that of making a PCB, in both the steps taken and the complexity. This approach would make chip design more accessible to a “casual” chip designer.

In my next post, I will discuss economic issues associated with chip fabrication.

Read more…

3689421522?profile=original

Above: Simple circuit using a transistor (an N-channel MOSFET), a resistor, and a capacitor. Left shows the schematic, right shows the layout.

Following discussions from a previous post here, I’m writing a three-part blog post on chip design, as I experience it professionally, and how I see it relating to both the open source community and the DIY crowd. This first post will discuss the chip design process itself. I am going to focus on what could be called “indie chip design” as it may be executed by a small company or even a single person. Obviously I can’t explain how to design the next hot microcontroller in a single blog post, but you should get a flavor for how this process is similar to and different from, say, designing a printed circuit board (PCB).

First a little background- I work at a small company, Centeye, that makes image sensor chips for various embedded vision and robotic applications. These designs incorporate both digital and analog circuitry, including pixel circuits, on the same chip. Due to our size (and budget!) we favor a minimalist approach to these image sensors. The chips themselves typically have anywhere from a thousand to at most several million transistors and are simple enough in architecture that one person (e.g. me) can handle the whole design. Our chips are three to six orders of magnitude simpler (by transistor count) than a contemporary processor or GPU chip, which currently contain up to several billion transistors.

The “complexity” of any design is an important consideration. Consider how much the complexity of a PCB can vary. At one end you have a 10-plus layer state-of-the-art computer motherboard. At the other end you can have a single-layer PCB that uses a basic Atmel and a few discrete parts to blink an LED. Both of these are “circuit boards”, but one can be designed by a single person in a fraction of an hour while the other requires specialized (read: expensive) software and a team of engineers putting in person-years of effort. The design methodologies are certainly different for each board- you can take a cowboy approach and “wing it” when designing the blinking LED board, but the computer motherboard requires careful and rigorous preliminary research, planning, and coordination between the designers. Now I don’t want to imply that I design chips with a cowboy approach (well, not usually), but the simplicity of our designs allows for different methodologies than are needed for, say, designing a new GPU.

 

The PCB design process

As a starting point that more people here will be familiar with, let's first consider the PCB design process one might follow using a tool like Eagle. You might follow these steps:

Specification: Decide what you want to make. Decide some basic components to include (e.g. what processor to use or other desired components). Decide the interface, for example how you power and “talk to” the board if there is an I/O aspect.

Sketching and Research: Sketch out different sections of the board schematic (on paper or with Eagle), read datasheets, and determine what exact components you need. (For example- what voltage regulators do you need, what caps / resistors do those regulators need, and so on.)

Schematic Entry: Enter the schematic diagram.

Board Layout: Place components in desired locations, and then route, route, route to make all the desired electrical connections between them.

Verification: Run design rule checks. You need to verify, for example, that routed wires are thick enough, that unconnected wires are not too close together, and that no routing is too close to the board's edge or to holes. Run electrical rule checks to make sure that you didn’t accidentally connect two nodes that should be separate.

Fabricate: Generate the Gerbers and get the board made.

Populate: Solder components to the board. When done, power it up and test it.

Depending on the board, you may do some of the above steps in parallel and/or repeat earlier steps if you find some problem with the design, for example if the components don’t fit in a desired space.

 

Chip layout vs. PCB layout

The most fundamental way that chip design is different from PCB design is in the purpose of the layout. For PCBs, the "layers" of the layout define the electrical connections between the different components that will be soldered to the board. Some layers define etched copper patterns to form "wires", while other layers define “vias” for electrical contact between layers.

For an example of chip layout, look at the photo at the top of this post. On a chip layout, some layers are similarly used to form electrical connections. Generally these are the “metal” layers, which have low resistance and are made on the actual chip with aluminum or copper; they appear as blue in the layout above. There are also “via” and “contact” layers defining similar connections between layers, visible above as black squares.

However there are other layers that are used to actually form devices when geometric objects are drawn according to well-defined rules. When designing a chip, you not only draw the wiring between transistors, capacitors, and resistors, but you also draw those transistors, capacitors, and resistors themselves.

For example, there is one red layer called “polysilicon”, which is a conductor but generally has a higher resistance. If you draw a long, thin wire of polysilicon, you get a resistor. (A long, thin wire of metal also gives you a resistor, but with much less resistance.) To form a capacitor, you can draw two plates of polysilicon on top of each other, with one plate on “polysilicon 1” (light red), which is deeper in the chip, and the other plate higher up on “polysilicon 2” (dark red). They will be separated by a thin insulator, so that the two plates and insulator form a capacitor. A transistor (such as a MOSFET) can be formed by crossing a polysilicon wire over a box of “active” layer (green). I won’t describe all the possibilities- that could fill a textbook- but hopefully you get the basic idea.

In other words, the fundamental difference is that on a PCB, the geometric figures you draw define the electrical connections between components, while on a chip, the geometric figures also create the components themselves.
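As a concrete (and hedged) example of how drawn geometry becomes a component: the resistance of a drawn polysilicon wire is just its sheet resistance times the number of “squares” (length divided by width). The sheet resistance below is an assumed, typical value for non-silicided polysilicon, not a number from any particular process.

```cpp
// Assumed sheet resistance (a typical value for non-silicided poly)- real
// numbers come from the process documents, which are under NDA.
void setup() {
  Serial.begin(115200);
  float sheetOhmsPerSq = 25.0;            // assumed ohms per square
  float lengthUm = 100.0, widthUm = 1.0;  // drawn wire: 100 um long, 1 um wide
  float r = sheetOhmsPerSq * (lengthUm / widthUm);
  Serial.print("drawn poly resistor: ");
  Serial.print(r / 1000.0);               // ~2.5
  Serial.println(" kOhm");
}

void loop() {}
```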

As you can imagine, the “design rule checks” for chips are much more complicated. Fortunately all that is now automated.

 

Cells

A chip design generally uses a hierarchical structure based on what we call “cells”. A basic cell may be an OR gate, a single capacitor, or a single pixel circuit. Then we can create higher level cells by “instancing” and connecting together existing lower level cells, and optionally adding new layout. For example, a pixel cell may contain a couple of transistors and a photodiode, plus routing for the power line and output. Then a pixel array cell can be constructed from a large 2D array of individual pixel cells. Similarly, a flip flop cell can be constructed from a few gates, and a register cell can be constructed from an array of flip flop cells. The “top level cell” would be the whole chip.

This hierarchical structure is analogous to the hierarchical structure of a computer program. You have variables and individual instructions at the bottom level, then lower level functions that use these variables and instructions, higher level functions that call these lower level functions, and finally the “main” function at the top.

In a chip design, the cell structure is present in both the schematic level and the layout. Consider a pixel cell: In the schematic entry software I use, elementary cells like “capacitor”, “resistor”, and “N-type transistor” are already defined. The pictures below show examples. To make a pixel cell I would instance the appropriate components (generally some transistors and a photodiode), draw wires to connect them, and create a “symbol” that represents the pixel cell. Then I would create an associated layout for the cell. This would be a drawing of the different layers (metal, polysilicon, active layer, etc.) that together create the electronic circuitry depicted in the schematic.

Once I have created a cell, I would next “verify” it. This includes running design rule checks such as making sure wires are thick enough and that unconnected wires are not too close (sound familiar?). This also includes other more esoteric design rule checks such as making sure the different layers are properly drawn to form devices like transistors and capacitors. I also run a “layout vs schematic” test to make sure that both the schematic and layout versions of the cell show the same circuit. If needed, I may also perform a circuit simulation (using SPICE) to verify that the circuit should function as I intend. In the SPICE simulation I would present different inputs to the circuit and verify that the simulated output is as expected. For example for a NOT gate I would apply a digital 0 and then 1 and verify that the output was the opposite. I would also verify that the threshold voltage (the crossover from 0 to 1) is appropriate.

Once a cell has been completed, I can then construct and verify higher level cells in the same manner. For example I could create a pixel array by instancing individual pixel cells. I would verify the pixel array cell and move on to the next cells until the chip is finished.

One nice thing about the cell architecture is that once a cell is made, you can reuse it in other chip designs, either verbatim or with some changes. This is much like how you can reuse existing source code to write a new program. It is also possible to acquire libraries of cells for many common circuits- I never design my own flip flop cells- I use existing flip flop cells that either the CAD tool company or the chip foundry has provided- I not only save time but also have the peace of mind of using a cell that I know works.

This ability to reuse cell designs is crucial for reducing design time. My record for designing a chip, start to finish, is three hours sitting in a Dupont Circle coffee house in DC back in 2001. All of the cells were reused, with one or two modified, except for the top level cell which was constructed from scratch. (Yes, that chip did work!)

The three pictures below show sample layout and schematic of pixel cells.

3689421588?profile=original

Above: Layout of a single pixel cell. Blue is the lowest metal layer (metal1), grey is the second metal layer (metal2), and tan is the third metal layer (metal3). White squares are "vias" between metal layers. The big area with right-hand cross hatches is the "N" side of the photodiode. The "P" side is the chip substrate itself.

3689421652?profile=original

Above: Schematic diagram of a single pixel circuit. Node "rowsel" is used to select a row of pixels. Node "out" is the column output. Nodes "prsupply" and "swvdd" are effectively power supplies for the pixel.

 

 

3689421539?profile=original

Above: 16x16 array of single pixel circuits used to form a 16x16 focal plane array section.

 

The Indie chip design process:

Now I can summarize the process I use to design a new image sensor chip:

Specification: Typically I would hand draw a few critical items in my notebook. This may be the schematic diagram for individual pixel circuits or related subcircuits. This would also include a specification of the interface, such as how I want the digital signals provided to the chip to control the chip, and the nature of the output (analog, digital, etc.)

Sketching and Research: Next I would sketch out a cell hierarchy for the chip, including initial hand-drawn schematic sketches for the most critical cells and how they interact. I would also look through existing designs and libraries for cells that I can reuse and/or modify.

Design the chip, cell by cell: Next I would go through the process of designing the cells that will make up the chip design. I would start out with the basic cells, construct both layout and corresponding schematic and symbols, and do any verification. Generally I would both create and verify a given cell before moving up the hierarchy.

Redesign: Sometimes it just happens that through the process of designing it you realize that something may not work or fit. In this case you just have to go back and change some aspect of the design.

Padframe: The padframe is a cell that contains all the “pads” which are the electrical contacts between the chip and the outside world.

Top level cell: I would then assemble and verify the top level cell of the chip.

Tape-out and Fab: The term “tape-out” is the chip design equivalent to making Gerber files for a PCB. I think this is an archaic term that comes from a time when the layout files were literally written to magnetic tape that was then mailed out to the fab company. Nowadays of course you just create the layout file and email it in.

Dicing and Packaging: The chips come back in wafers (see this previous post). We send them out to a company to dice them, i.e. cut them up into individual dies. Packaging refers to the process of connecting the individual dies to some sort of package that allows you to solder them to a board. You can’t solder directly to the chip (the pads are between 60 and 100 microns wide!), so typically one uses a wire bonding machine to connect the chip to a package with 1-mil thick gold wire. We own a wire bonding machine, so we just wire bond the chips directly to a test PC board, in a process described here. Once this is done we can then test the chip to verify it works.

Cost? For a single chip design I’ve spent anywhere from $980 to $58k to get multiple (between 4 and 40) copies of a single chip design made. For wafers I’ve spent anywhere from about $18k to just under $100k to get multiple wafers containing multiple chip designs made, generally yielding four to eight thousand chips. (Note that for a fine-scale digital process used to make current GPUs or CPUs, you can probably spend millions.) As you make more the price drops further. The economies of scale are drastic. I will comment on that in another post.

Time? You can get PCBs made the same day if you really need it. For chips I wait anywhere from 6 weeks to 4 months. Yes, the long wait can make one nervous…

 

Simplicity and avoiding feature creep

A final thought on the chip design process as I experience it- Simplicity rules! The final part of verifying a chip design is stressful since you really can’t fully verify the chip design, including how the individual cells interact, until you actually fabricate the chip. (Actually for digital circuits the behavior is generally predictable if you use the right practices, but analog circuits are more complicated.) The best I can do until then is to rely on circuit simulations of a limited set of scenarios, along with double, triple, and quadruple checking, and hope that the resulting design works.

So as a result of this, I’ve found that the best habit to have is to be very choosy about what features to include and keep all aspects of the chip (interface, layout, topology, etc.) as simple as possible. The simpler the chip, the easier the layout and the verification, and the less opportunity you have to screw things up! The 80/20 principle is key here- Only include what is necessary. This can be a challenge because we engineers are notorious for saying “hey we can add this feature, and while we’re at it add that one and this other one too”.

In my next post, I'll discuss the real issues I face when trying to "open" up chip designs, and how these may be addressed.

Read more…

3689419257?profile=original3689419121?profile=original

I just got back some new silicon! These are the latest image sensor chips I designed specifically for robotics and embedded vision applications. The pictures above show a full wafer followed by a close-up of the wafer from an angle. There are four chips in each reticle- if you look closely you can see them packed into a rectangle (about 8.8mm by 7.0mm). Shortly after that picture was taken, we had the wafer diced up into individual chips and started playing with them!

One of the chips is named “Stonyman” and is a 112 x 112 image sensor with logarithmic-response pixels and in-pixel binning. You can short together MxN blocks (M and N independently selected from 1, 2, 4, or 8) of pixels to implement bigger pixels and quickly read out the image at a lower resolution if desired. The interface is extremely simple- there are five digital lines that you pulse in the proper sequence to configure and operate the chip, and a single analog output holding the pixel value. With two power lines (GND and VDD) only eight connections are necessary to use this chip.
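As a rough idea of what "pulse five lines, read one analog output" looks like from the host side, here is a skeleton sketch. The pin assignments and the pulse sequence are placeholders (the real sequence will be in the datasheet); the point is the overall shape of the driver and how 8x8 binning shrinks the readout loop from 112x112 to 14x14.

```cpp
// Skeleton only: the pin names and pulse sequence below are placeholders, not
// the real Stonyman protocol (that will be in the datasheet). It shows the
// general shape of a host-side driver- pulse a few digital lines to select a
// pixel, then sample the single analog output. With 8x8 binning the 112x112
// array reads out as a 14x14 image.
const int CTRL_PIN[5] = {2, 3, 4, 5, 6};   // the five control lines (arbitrary pins)
const int PIN_ANALOG  = A0;                // single analog output

const int RES_FULL = 112;
const int BIN      = 8;                    // 1, 2, 4, or 8 per the post
const int RES_BIN  = RES_FULL / BIN;       // 14x14 when binning 8x8
unsigned int img[RES_BIN][RES_BIN];

void pulse(int pin) {                      // one generic control pulse
  digitalWrite(pin, HIGH);
  digitalWrite(pin, LOW);
}

void selectPixel(int r, int c) {
  // Placeholder: the real chip walks row/column pointers with a specific
  // sequence of pulses on the five control lines- substitute that here.
  (void)r; (void)c;
  pulse(CTRL_PIN[0]);
}

void setup() {
  for (int i = 0; i < 5; i++) pinMode(CTRL_PIN[i], OUTPUT);
  Serial.begin(115200);
}

void loop() {
  for (int r = 0; r < RES_BIN; r++)
    for (int c = 0; c < RES_BIN; c++) {
      selectPixel(r, c);
      img[r][c] = analogRead(PIN_ANALOG);  // 10-bit sample of the pixel value
    }
  // ...process or stream img[][] from here...
}
```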

Another chip is named “Hawksbill” and is a 136 x 136 image sensor, also with logarithmic response pixels (but no binning) and the same interface as Stonyman. What is different about Hawksbill is that the pixels are arranged in a hexagonal format, rather than a square format like Stonyman and 99% of other image sensors out there. Hexagonal sampling is not conventional, but it is actually mathematically superior to square sampling, and with recent advances in signal processing one can perform many image processing operations more efficiently in a hexagonal array than a square one.

3689419206?profile=original3689419277?profile=original

(Above: 8x8 hex pixel layout from CAD tools, Stonyman chip wire bonded to test board- pardon the dust!)
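One practical question with a hexagonal sensor is simply how to index it in memory. A common convention (an assumption on my part, not the Hawksbill's documented readout order) is to store the hex array as ordinary rows with every other row offset by half a pixel; each pixel then has six neighbors whose column offsets depend on the row's parity, as in this small sketch.

```cpp
// My own illustration- the storage order is an assumption, not the Hawksbill's
// documented readout order. A hexagonal array is commonly stored as ordinary
// rows with every other row offset by half a pixel; each pixel then has six
// neighbors whose column offsets depend on whether its row is even or odd.
const int ROWS = 136, COLS = 136;

// Neighbor offsets {dr, dc} for even and odd rows ("offset coordinates").
const int NBR_EVEN[6][2] = {{0,-1},{0,1},{-1,-1},{-1,0},{1,-1},{1,0}};
const int NBR_ODD [6][2] = {{0,-1},{0,1},{-1,0},{-1,1},{1,0},{1,1}};

// Visit the hex neighbors of pixel (r, c) that fall inside the array.
template <typename F>
void forEachHexNeighbor(int r, int c, F visit) {
  const int (*nbr)[2] = (r % 2 == 0) ? NBR_EVEN : NBR_ODD;
  for (int k = 0; k < 6; k++) {
    int rr = r + nbr[k][0], cc = c + nbr[k][1];
    if (rr >= 0 && rr < ROWS && cc >= 0 && cc < COLS) visit(rr, cc);
  }
}

void setup() {
  Serial.begin(115200);
  int count = 0;
  forEachHexNeighbor(0, 0, [&](int r, int c) { (void)r; (void)c; count++; });
  Serial.println(count);   // a corner pixel has only 2 in-bounds neighbors
}

void loop() {}
```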

We plan to release the chips in the near future, with a datasheet, sample Arduino script, and (yes!) a schematic diagram of the chip innards. (If anyone *really* wants one now, I can make an arrangement…)

We are also working on a new-generation ArduEye sensor shield with these chips. The shield will be matched to an Arduino Mini for small size, and use a 120MIPS ARM for intermediate processing. The design will be “open”, of course. (Note- anyone who purchased an original ArduEye will get a credit towards the purchase of the new version when it comes out.)

(The thrill of getting new chips back is much like that for circuit boards. You designed it, so in theory you know how it works. But you are never 100% sure and there is no datasheet for you to consult other than your own notes or CAD drawings. You are always slightly afraid of getting a puff of smoke when you first power it. No smoke… the circuit breaker didn’t trigger… so all is good. Then you probe it, verify that different portions work as expected, tweak various settings, and finally get it working. The experience is just like that for a PCB except the stakes are higher.)

Read more…

3689391445?profile=original

We are finally having a "shield" board manufactured for the Arduino platform that interfaces a Centeye image sensor with an Arduino to form a true (if simple) "smart sensor". This particular version is about as simple as you can make it:

Shield Board: The board itself is a simple 2-layer board, and can be connected to either a full-size Arduino board (we've so far tried the Duemilanove and the Pro), or a mini-sized version (we've tried Sparkfun's Pro Mini). You need a 5V board to power the image sensor.

If one doesn't want to use the board with an Arduino, it is of course possible to just use it as a breakout board for the image sensor chip. The board design is open source.

Image Sensor: The shield board is compatible with any of Centeye's current image sensor chips, but for the above version we are using the Tam series- These are very simple image sensor chips requiring only five lines- Ground, Power, Clock, Reset, and AnalogOut. When you pulse Reset, a counter on the chip points to the first row, first column pixel, which is output as an analog value. Pulsing the Clock line advances the counter to the next pixel row-wise. That is it. The chips are available in two resolutions- Tam2 at 16x16 and Tam4 at 4x32, the latter with rectangular shaped pixels. I've also taken a leap of faith and decided to publish the schematic of the chip! We've released the analog portion down to the transistor level, and the digital portion down to the gate level.
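From that description, grabbing a frame is just a Reset pulse followed by Clock pulses while sampling the analog output. Here is a minimal sketch along those lines for the 16x16 Tam2; the pin assignments are arbitrary examples, and the released sample sketch should be treated as the reference.

```cpp
// Minimal readout along the lines described above (pin assignments are
// arbitrary examples; treat the released sample sketch as the reference).
// A Reset pulse points the on-chip counter at the first pixel, each Clock
// pulse advances it row-wise, and the current pixel appears on AnalogOut.
const int PIN_CLOCK  = 2;
const int PIN_RESET  = 3;
const int PIN_ANALOG = A0;

const int ROWS = 16, COLS = 16;            // Tam2; use 4x32 for the Tam4
unsigned int img[ROWS][COLS];

void pulse(int pin) {
  digitalWrite(pin, HIGH);
  digitalWrite(pin, LOW);
}

void setup() {
  pinMode(PIN_CLOCK, OUTPUT);
  pinMode(PIN_RESET, OUTPUT);
  Serial.begin(115200);
}

void loop() {
  pulse(PIN_RESET);                        // back to row 0, column 0
  for (int r = 0; r < ROWS; r++)
    for (int c = 0; c < COLS; c++) {
      delayMicroseconds(2);                // small settling time (adjust as needed)
      img[r][c] = analogRead(PIN_ANALOG);  // sample the current pixel
      pulse(PIN_CLOCK);                    // advance to the next pixel
    }
  // img[][] now holds one raw frame (fixed pattern noise not yet removed).
}
```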

Right now we have about 200-250 each of the Tam2 and Tam4 chips in stock, in bare die form.

Optics: The sensor will be shipped with a micro lens (about 1.6mm)- we can ship the board with the lens mounted or unmounted. In the latter case the Tam4 chip will be open and exposed, which is appropriate for people who want to use their own lens.

Sample code: I've also written a sample Arduino sketch that illustrates acquiring an image, obtaining a fixed pattern noise mask, and computing one dimensional optical flow from the 4x32 Tam4 chip. By cutting out the serial monitor and acquiring/computing optical flow on just one 32-element row, we obtained more than 200 frames per second- I think more is possible with further optimization. The source code is "open" and is intended to be a starting point for developing your own application.
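The released sketch is the reference, but to show the two ideas it combines, here is a compact, stand-alone illustration: subtract a fixed pattern noise mask captured once against a uniform scene, then estimate 1D flow on a 32-pixel row by finding the integer shift that best aligns the current row with the previous one (minimum sum of absolute differences- a plain block-matching approach, not necessarily the algorithm in the released code).

```cpp
// Illustration only- not the released sample code. Two steps on a 32-pixel
// Tam4 row: (1) subtract a fixed pattern noise (FPN) mask captured once while
// imaging a uniform scene, and (2) estimate 1D flow as the integer shift that
// best aligns the current row with the previous one (minimum sum of absolute
// differences, i.e. plain block matching).
const int N = 32;                  // pixels in one Tam4 row
int fpn[N];                        // FPN mask: response to a uniform scene
int prevRow[N], curRow[N];

void removeFPN(int* row) {         // remove per-pixel offsets
  for (int i = 0; i < N; i++) row[i] -= fpn[i];
}

// Return the shift in [-maxShift, +maxShift] (pixels per frame) that minimizes
// the SAD between the previous and current rows.
int flow1D(const int* prev, const int* cur, int maxShift) {
  long bestSad = 2147483647L;
  int bestShift = 0;
  for (int s = -maxShift; s <= maxShift; s++) {
    long sad = 0;
    for (int i = maxShift; i < N - maxShift; i++)
      sad += abs((long)cur[i] - (long)prev[i + s]);
    if (sad < bestSad) { bestSad = sad; bestShift = s; }
  }
  return bestShift;
}

void setup() { Serial.begin(115200); }

void loop() {
  // Acquisition (filling curRow[] from the chip) is omitted; see the readout
  // example above. After grabbing each new row:
  removeFPN(curRow);
  int shift = flow1D(prevRow, curRow, 4);   // +/- 4 pixel search range
  Serial.println(shift);
  memcpy(prevRow, curRow, sizeof(prevRow));
}
```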

Uses: I didn't design this board for any specific application, but it should be able to do some things of interest to robotics. I've come to appreciate that it can be difficult if not impossible to make a "one size fits all" sensor, even if supporting just one mode (like optical flow). This is because there are many parameters that must be adjusted for each application. Thus for this board we are taking a different approach- rather than try to hide the optical flow computation from the user, we want the user to be fully aware of what is going on. In order to really use this sensor, you'll have to do some hacking of the code to tailor it to your application. I'll be honest- this is not for those who are afraid to hack a bit.  But I think the Arduino environment makes it very easy to hack, play around, and try different things. I'd appreciate your feedback on this approach.

As for what I think the board will ultimately support (with some hacking)- in the context of robotics and drones I think that things like wall following or basic terrain following should be doable (on a forward-moving platform). Several sensors should support some basic obstacle avoidance against large obstacles. Fulfilling the role of a downward optical flow sensor for a quad might be doable, but would require some careful optimization and algorithm tuning. I don't think it's impossible though- if the classic 1980s video game Defender could run, with rendering, on a 2MHz processor, then I would imagine adequate optical flow for a quad could be done with 16MHz. Please note though that I haven't tried any of these things yet with this board.

Just so that you know megapixels aren't needed- here are a few facts to consider:

1) Most flying insects have from several hundred to several thousand pixels. They have at most "kilopixels"!

2) We demonstrated altitude hold in 2001-2002 with 16 to 88 pixels, and obstacle avoidance in 2003 with 264 pixels total.

3) We also controlled the yaw angle of a helicopter with 8 (eight) pixels.

For the released files (including the Tam chip schematic), please visit this posting at Embedded Eye. For other details including purchasing, please visit this page at Centeye. Current asking price is $100 per board plus $9 shipping/handling for US customers. We hope we can lower this price in the future as we learn to automate the way we currently work with bare die and the lenses.

Please feel free to ask questions.

Read more…

3689386060?profile=original

I would like to "formally" announce two open source projects related to optical flow / programmable vision sensors. These are based on some of the optical flow techniques developed at Centeye, but in the spirit of "open source" are meant to be hacked/modified/copied any way a user deems fit. In both cases, the source code has been opened up under a modified FreeBSD license, while the board design has been released under a Creative Commons Attribution-ShareAlike license, the same license that applies to the Arduino boards.

The first project is the CYE8 sensor, an optical flow sensor based on an Atmel ATmega644 (possibly to be replaced by an ATmega1284) 8-bit processor, and using a Faraya64plus sensor head. (A "sensor head" is a vision chip wire bonded to a 9mm x 9mm PCB with a board to board connector on the other side.) We fabricated our first two iterations last year (the first one described here), and are now readying the third iteration this month.

The hardware of the second project was introduced in a recent blog post and is based on the Sparkfun Arduino Pro Mini platform (which uses a similar but smaller Atmel microcontroller) and comprises a simple shield board that interfaces the Arduino with a sensor head. Currently we have used only a FarayaSmall sensor head, but this board will support other chips as well.

At the current time, these projects are hosted on another open Ning network Embedded Eye, since we are trying to capture a broad set of applications beyond drones. If these projects take off, we can set up Huddle spaces accessible across both Ning networks, or move the project elsewhere. (I'll take suggestions- I'm still learning about how to do projects like this.) The CYE8 project is located here and the Arduino-based project is located here. These forum pages include initial board designs and source codes.

Based on interest, we will likely also launch projects built around an Atmel AVR32 processor (faster than the AVR8's) and/or an XMOS quad-core processor (if you have real need for speed).

One common theme of these two projects (with both strengths and weaknesses) is that they use vision / image sensor chips designed by Centeye. (This is not a requirement of the license- it is just how they were designed.) The strength is that, since we designed the vision chips, we can reveal as many details of their inner workings as we want. We have all heard complaints about chip manufacturers being too vague about what is inside, so I hope this is a welcome change. The weakness is that we basically have to burn new wafers every time we want more sensors, so they are not as available as, say, a part from Digikey.

We are actually going to start a new run of silicon soon with the intent of increasing manufacturing quantity. It is tempting to use this as an opportunity to explore semi-open chip designs. I'd be happy to share a (virtual/real) beer with anyone interested in discussing (whether here, at EE, or directly) the various issues associated with the design and manufacture of chips of this type.

I look forward to speaking with everyone soon!

Geof

Read more…

Make your own plastic mini lens, part 2

3689384583?profile=original

In a recent post I described some simple acrylic lenses I made using a simple press-molding technique. The methods were crude, but the results weren't too bad. Also, I had designed an assembly containing four minor variations of these lenses and submitted it for fabrication over the holidays. The injection molding step was a bit of an experiment- rather than going to a dedicated optics-grade molding firm, which would have cost us well into five figures to try, we used the U.S.-based Protomold, which was able to create this mold in two weeks and make 100 assemblies (400 lenses) for a bit more than $2k. I again selected acrylic as the resin material for these lenses.

The picture above shows the parts as they came back (top and bottom side). Below shows a close-up of two lenses cut out from the above assembly.

 

3689384623?profile=original

The real test, of course, is the image quality. I mounted these lenses onto some of our image sensor chips using the same methods as those discussed in the recent post mentioned above, painted on an iris, and sealed the chip up. Below is a picture of me waving at the camera at 32x32 resolution.

 

3689384524?profile=original

I also took another picture of my backyard with a different chip and a different setup at 90x90 resolution. The field of view was roughly between 70 and 80 degrees, so the pixel pitch was less than one degree. The image quality in this latter picture was not as good. Two factors probably contributed to this- first, the finer pixel pitch could have exceeded the limits of the optics; second, my method for removing fixed pattern noise was less accurate in this setup. Right now I do not know which of these two factors dominates.

3689384598?profile=original

One comment- There was in fact some shrinkage in the lens, on the flat bottom part that gets placed onto the chip. However this was small and easily filled in with the optical adhesive, which has almost the same index of refraction as acrylic.

One lesson learned regarding the injection mold design: There are four slightly different lenses in the above assembly. The difference is in the total thickness, with sequential lenses differing by 25 microns. (It turned out this difference was moot compared to the thickness variation from the amount of adhesive used.) This was to allow me to experiment with variations to compensate for factors such as shrinkage and enlarging of the mold through polishing. However, I made the mold family perfectly symmetrical (other than the small variations in lens thickness)! When I got the parts back, it was hard to tell which lens was which! Fortunately I found the sprue (where the plastic charge gets injected into the mold) and, with careful eyeballing under a microscope, identified the lenses. But the lesson learned is that I should have added a slight marking or asymmetry to help me tell right from left.

Overall I am pleased with the results. For pixel pitches of about two degrees per pixel and up, this technique is adequate. Two degrees per pixel may not sound like much, but many flying insects have this type of resolution and do quite well. It may be that with the right iris and better fixed pattern noise cancellation, I could get the sharpness down to one degree per pixel, but this will have to wait.

Here again is the link to a zip folder containing the Alibre files for the mold: CYE_LensMold_Untested.zip

Read more…