I've been working on a new version of our ArduEye using one of our "Stonyman" image sensor chips and decided to see if I could grab four dimensions of optical flow (X shift, Y shift, curl, and divergence) from a wide field of view. I wirebonded a Stonyman chip to a 1" square breakout board and attached it to an Arduino Mega 2560 using a simple connecting shield board. I then glued a simple flat printed pinhole onto the chip using (yay!) 5-minute model airplane epoxy. With a little black paint around the edges, the result is a simple, low-resolution, very wide field of view camera that can be operated using the Arduino.

[Photos: the Stonyman breakout board, the Arduino shield, and the assembled pinhole camera]

I programmed the Arduino to grab five 8x8 pixel regions: region 0 looks forward, while the other four regions point about 50 degrees diagonally off the forward axis, as shown. In each region the Arduino computed X and Y optical flow and odometry (essentially an accumulation of optical flow over time).
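
Here is a rough sketch of what that acquisition and odometry loop can look like. The helpers grabRegion() and computeOpticalFlow() are hypothetical stand-ins for whatever pixel readout and flow routines your hardware and library actually provide; this is not the ArduEye API, just the structure of the loop.

    #include <string.h>  // memcpy

    #define NUM_REGIONS 5
    #define REGION_SIZE 8   // each region is 8x8 pixels

    // Hypothetical, hardware- and algorithm-specific routines provided elsewhere
    void grabRegion(int region, uint8_t *img);
    void computeOpticalFlow(const uint8_t *prev, const uint8_t *cur,
                            int size, float *ofx, float *ofy);

    uint8_t curImg[NUM_REGIONS][REGION_SIZE * REGION_SIZE];
    uint8_t lastImg[NUM_REGIONS][REGION_SIZE * REGION_SIZE];
    float odoX[NUM_REGIONS], odoY[NUM_REGIONS];  // accumulated flow per region

    void setup() {}

    void loop() {
      for (int r = 0; r < NUM_REGIONS; r++) {
        grabRegion(r, curImg[r]);  // read the 8x8 window from the chip
        float ofx, ofy;
        computeOpticalFlow(lastImg[r], curImg[r], REGION_SIZE, &ofx, &ofy);
        odoX[r] += ofx;  // odometry: accumulate flow over time
        odoY[r] += ofy;
        memcpy(lastImg[r], curImg[r], REGION_SIZE * REGION_SIZE);
      }
    }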

To compute X and Y shift, the algorithm summed the X and Y odometry measurements, respectively, across the five pixel regions. These are the first two dimensions of optical flow that most people are familiar with. To compute curl and divergence, the algorithm added the appropriate X or Y odometries from the corresponding pixel regions, with signs chosen according to the geometry. For curl this yields a measurement of how the sensor rotates around its forward axis. For divergence this yields a measurement of motion parallel to the forward axis.
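
To make the combination step concrete, here is a minimal sketch using the odoX/odoY arrays from the loop above. The layout of the four diagonal regions is an assumption (region 1 right of forward, 2 left, 3 up, 4 down), so the sign pattern is illustrative and must be matched to the actual geometry.

    // Translation: X and Y flow is common to all five regions
    float shiftX = odoX[0] + odoX[1] + odoX[2] + odoX[3] + odoX[4];
    float shiftY = odoY[0] + odoY[1] + odoY[2] + odoY[3] + odoY[4];

    // Curl: tangential flow around the forward axis. Rotation about that
    // axis shows up as Y flow in the left/right regions and X flow in the
    // up/down regions, with opposite signs on opposite sides.
    float curl = odoY[1] - odoY[2] - odoX[3] + odoX[4];

    // Divergence: radial flow away from (or toward) the forward direction,
    // produced by motion along the forward axis.
    float diverg = odoX[1] - odoX[2] + odoY[3] - odoY[4];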

[Figure: arrangement of the five pixel regions and the corresponding flow components]

In the current configuration the system operates at about 5 to 6 Hz, though with the serial dump turned on that slows to about 2 Hz. Most of the delay is in acquisition and comes from wasteful array lookups used to select which pixels to read out. With an external ADC (which the middle board supports) and better code, there is room for probably an order of magnitude speed increase.

The video shows a few test runs in which I exposed the sensor to three of the four fundamental motions. Y shift was implemented using an air track (like the ones some of you used in physics class). Curl was implemented with the aid of a well-loved turntable. Divergence was implemented by hand, moving the sensor toward and away from clutter. The corresponding plots show the response of all four components, with the "correct" one emphasized.

You can see that the four components are largely independent. There is some crosstalk: curl and divergence tend to be the biggest recipients, since each is effectively a difference between signals, and getting an accurate number by subtracting two noisy numbers is not easy (if each odometry carries independent noise, the difference carries roughly 1.4 times as much noise, while the true curl or divergence signal may be small). Factors such as varying distances around the camera can cause uneven stimulation of the different pixel fields, resulting in phantom curl and divergence. There is also a little bit of drift. There is certainly a lot of room for optimizing the system.

One immediate improvement would be to use two of these Stonyman cameras back-to-back, allowing near-omnidirectional sensing. This would give us more information to separate the different components (X, Y, curl, div), as well as allow us to separate the other two axes of rotation from X and Y translation.

A setup similar to this formed the basis for our recent demonstration of single-sensor yaw and heave (height) stabilization.

What could something like this be used for? You could put it on a ground vehicle and do some odometry with it, either looking down or even looking up, though when looking up the odometry measurements would depend on the distance to other objects in the environment. You could also mount it on a quad looking down: X and Y would give you the basic optical flow for regulating sideways drift. Curl gives you yaw rotation (though you already have that from a gyro). Divergence is the most interesting: it would tell you about changes in height.

You could also implement something similar with five of Randy's optical flow sensors aimed to look in the same five directions. (You could probably dispense with sensor 0 to save weight/cost in this case.)


Comments

  • Roy,

    Short answer to your question: yes. For discussion, let's assume this sensor is on a quad, pointed downwards. Curl measures yaw motion, while divergence measures change in height. X drift and Y drift, in the current setup, are affected by both lateral drift (in the X and Y directions) and pitch and roll rates; the current setup does not distinguish between the two. In other words, if the quad pitched forward, the resulting optical flow would be similar to that of the quad drifting backwards. (This can be destabilizing when fed into a PID loop.)

    It is possible to separate out, say, X motion from roll rate, since you have optical flow looking straight down as well as downward and to the side. However, the two motions produce very similar flow patterns and are tough to tell apart unless the optical flow measurements themselves are accurate enough. We did something like this to get 4DOF control of our helicopter using one sensor (I haven't posted that yet, but it would be similar to this post), where we had to separate yaw motion from lateral drift.

    If you increased the number of optical flow measurements to cover a full 360 degrees (say, using two sensors), or used a gyro to separate out the change in pose, you would get a better drift measurement.
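
    For instance, gyro derotation could look something like the sketch below. Everything here is illustrative: the scale factor, variable names, and sign conventions depend on the optics, frame interval, and axis definitions of your particular setup.

        // Subtract the flow predicted from the gyro rates so that (mostly)
        // translation-induced flow remains. kPixelsPerRadian depends on the
        // optics, pixel pitch, and frame interval; the value here is made up.
        const float kPixelsPerRadian = 20.0f;

        // gyroRateX/gyroRateY are body rates in rad/s, dt the frame interval
        float derotX = odoXrate - gyroRateY * dt * kPixelsPerRadian;
        float derotY = odoYrate - gyroRateX * dt * kPixelsPerRadian;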

    -Geof

  • Great stuff, Geoff!

    So, now you have 4 of the 6 DOF. Can you use some sort of trapezoidal distortion to estimate pitch & roll attitudes or rates? What are the next highest dimensions of optical flow?

    - Roy

  • Oops! Fixed it. Sorry about that.

  • Can't see the video; it says "It's private".
