Figure 1: (left) Foam drone with optical flow sensor mounted under a wing, Summer 2001. (right) Foam drone with optical flow sensors for attempted obstacle avoidance, Summer 2002.

Today it is universally acknowledged that drones operating close to the ground need some form of obstacle avoidance. What follows is the story of our own first attempt at putting obstacle avoidance on a drone, during the years 2001 through 2003, using neuromorphic artificial insect vision hardware. I enjoy telling this story because the lessons learned run contrary even to current practice yet, upon reflection, should be common sense. The most important lesson is still relevant today: the best results are obtained with a systemic approach in which the drone and any obstacle avoidance hardware are holistically co-designed. This contrasts with the popular approach in which obstacle avoidance is treated as a modular component that can simply be added to a drone.

Initial Successes with Altitude Hold

As discussed in my last post, by taking inspiration from the vision systems of insects, we implemented interesting vision-based behaviors on drones using only several hundred pixels. Nineteen years ago, in 2001, we built an optical flow sensor using a 24-pixel neuromorphic vision chip and a modest 8-bit microcontroller. We mounted the sensor on the bottom of one wing of a fixed-wing model airplane to view the ground (Figure 1 left, above). The sensor was programmed to measure front-to-back optical flow and, from that, estimate the airplane's height above the ground. We implemented a simple proportional control rule that would increase the aircraft's throttle or elevator pitch as the optical flow increased. The system worked quite well- once the aircraft was set to the target height, it would fly for as long as the battery lasted (and wind conditions permitted). The sensor operated only the throttle or elevator- the human operator would still steer the aircraft with the rudder via a standard RC control stick. We verified that the system could follow gentle slopes up or down. We later even achieved similar results over unbroken snow on a cloudy day!
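
To make the control rule concrete, here is a minimal sketch in C of a proportional altitude-hold loop of the kind described above. The constants, names, and servo interface are illustrative assumptions, not the original PIC firmware: as the front-to-back optical flow rises above its set point (meaning the ground is closer than desired), the throttle command rises proportionally.

```c
/* Minimal sketch of a proportional altitude-hold rule. All names and
 * constants are illustrative placeholders, not the original firmware.
 * Higher front-to-back optical flow means the ground is closer, so the
 * controller adds throttle to climb back toward the set point. */

#define OF_SETPOINT   100   /* optical flow reading at the desired height (arbitrary units) */
#define KP            2     /* proportional gain, tuned in flight */
#define THROTTLE_TRIM 1500  /* neutral servo pulse width in microseconds */

int altitude_hold(int optical_flow)
{
    int error = optical_flow - OF_SETPOINT;    /* positive when too low (flow too fast) */
    int throttle = THROTTLE_TRIM + KP * error; /* more flow -> more throttle */

    /* clamp to the valid RC servo pulse range */
    if (throttle < 1000) throttle = 1000;
    if (throttle > 2000) throttle = 2000;
    return throttle;
}
```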

You can see one of the videos here, which I shared previously: https://youtu.be/EkFh_2UX-Jw You'll hear a tone in the video. Our telemetry was laughably simple- the sensor transmitted this tone by on-off keying an RF module at a rate that varied with the optical flow. The camcorder operator carried a scanner (set to AM mode) that received the tone and played it aloud, allowing it to be recorded along with the video of the flight.
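
For illustration only, the keying routine for such one-bit telemetry might look something like the sketch below; rf_key() and delay_us() are hypothetical placeholders rather than anything from the original code.

```c
/* Illustrative sketch of one-bit telemetry: the RF module's key line is
 * toggled at a rate that scales with the optical flow measurement, so the
 * tone heard on an AM receiver rises in pitch as the flow increases. */

extern void rf_key(int on);            /* assumed: drives the RF module's keying pin */
extern void delay_us(unsigned int us); /* assumed: busy-waits for us microseconds */

void send_tone_cycle(unsigned int optical_flow)
{
    /* higher flow -> shorter half period -> higher-pitched tone */
    unsigned int half_period_us = 20000u / (optical_flow + 1u);

    rf_key(1);
    delay_us(half_period_us);
    rf_key(0);
    delay_us(half_period_us);
}
```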

I look back fondly at the scrappiness of our approach! I personally designed the vision chip using free and open source tools (Magic and Spice) on a $1000 laptop running Linux. I then had the chip fabricated by the MOSIS service for $1200. Our most expensive piece of lab equipment was a PIC microcontroller emulator that I think cost me about $2500 at the time. I find it interesting that a “fabless semiconductor” startup today “needs” tens of millions of dollars to start, but that is a topic for another post. Going from the design of the vision chip (December 2000) to the first successful flight (July 2001) took eight months, much of it learning how to build and control the model airplane!

Failed Attempts at Obstacle Avoidance

After getting “altitude hold” under control, I decided to tackle obstacle avoidance next, again using optical flow. I drew inspiration from the work of noted biologist (and MacArthur grant winner) Michael Dickinson- his laboratory had recently recorded the 3D flight paths made by fruit flies in a chamber and analyzed how the flight paths turned in response to an obstacle, in this case the wall of the chamber (Figure 2). Not surprisingly, the flies tended to travel in straight lines until they were adequately close to the wall, at which point they would execute a sharp turn away. An analysis of the flight paths and the environment showed that the turn reliably happened whenever the optical flow due to the approaching wall crossed a threshold. The fly would then resume forward flight in the new direction until it approached another part of the wall.

Figure 2: 3D flight paths made by fruit flies in a confined arena. Apologies for the poor image quality.

Back in 2001, this seemed like a behavior that could be implemented with a few optical flow sensors. My first attempt used just two optical flow sensors mounted on the aircraft, viewing diagonally forward-left and forward-right. The controller was programmed to detect when the optical flow on one side was high, at which point it would deflect the aircraft's rudder to steer away for a fixed duration. Here is a video clip of an early test: https://youtu.be/Ao3rhiQR0BM In this case the tone indicated the rudder actuation- a middle pitch indicated neutral, i.e. no actuation, while a high or low tone indicated an attempt by the rudder to turn one way or the other. It was comical- the optical flow sensor picked up the tree and initiated a turn, but not soon enough to avoid a collision!
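
The avoidance rule itself was simple. The C sketch below is a reconstruction for illustration, with assumed thresholds, names, and rudder interface rather than the original firmware: when either diagonal sensor reports optical flow above a threshold, the rudder is deflected away from that side for a fixed number of control cycles.

```c
/* Hedged sketch of a two-sensor threshold avoidance rule. Constants and
 * the rudder interface are illustrative assumptions, not the original code. */

#define OF_TURN_THRESHOLD 80   /* flow level that triggers an avoidance turn */
#define TURN_CYCLES       50   /* how many control cycles to hold the turn */

enum rudder { RUDDER_LEFT = -1, RUDDER_NEUTRAL = 0, RUDDER_RIGHT = 1 };

static int turn_cycles_left = 0;
static enum rudder turn_direction = RUDDER_NEUTRAL;

/* Called once per control cycle with the left and right flow measurements;
 * returns the rudder command for this cycle. */
enum rudder avoid_step(int flow_left, int flow_right)
{
    if (turn_cycles_left == 0) {
        if (flow_left > OF_TURN_THRESHOLD) {
            turn_direction = RUDDER_RIGHT;    /* obstacle on the left: turn right */
            turn_cycles_left = TURN_CYCLES;
        } else if (flow_right > OF_TURN_THRESHOLD) {
            turn_direction = RUDDER_LEFT;     /* obstacle on the right: turn left */
            turn_cycles_left = TURN_CYCLES;
        } else {
            turn_direction = RUDDER_NEUTRAL;  /* fly straight */
        }
    }

    if (turn_cycles_left > 0)
        turn_cycles_left--;

    return turn_direction;
}
```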

We spent almost two years trying to improve that demonstration. I designed improved neuromorphic vision chips (with 88 pixels), then we made lighter yet more powerful versions of the optical flow sensors. We even experimented with different optical flow algorithms by varying the firmware on the microcontroller while still using the same vision chip as a front-end. Figure 1, above right, shows one iteration. It had four sensors- one downward to control height and three forward to detect obstacles. We added a proper telemetry downlink that allowed us to record and display data such as optical flow measurements, aircraft yaw rate, and control responses. We still did not achieve improved obstacle avoidance. We did, however, produce a graphical display showing three beautifully hilarious sequential events: First the increased optical flow due to the looming tree, then the control response by the rudder, and finally a sharp spike in the yaw rate as the aircraft slammed into the tree! The only thing that was “improved” was the monetary value of the electronics we had to fish out of a tree after each crash…

Eureka!

It is human nature, when trying to solve a difficult problem, to go down the same general path and bang your head against a dead end until you accept that you are indeed at a dead end and need a different approach. In my case, the epiphany came when I took a second look at the flight paths recorded by Prof. Dickinson. I saw something that I had missed before. Sure, the flies flew a straight line and then made sharp turns upon detecting an obstacle. What I had missed, however, was the sharpness of the turn- the arc flown during the turn had a radius of a few centimeters. More significantly, the turn was made about ten centimeters away from the wall. I imagined a parameter “R”, which might be the “turning radius” of the insect (Figure 3). A “turning radius” is not a native measure of the maneuverability of a flying insect or drone in the same way that it is for a ground vehicle. However, it does serve as a first-order approximation. I then imagined a parameter “D”, which might be the size of an obstacle to avoid or the distance from an obstacle at which the drone or insect would turn to avoid it.

Figure 3: D/R ratio

I then realized that what is critical is the ratio of the two: D/R. This ratio essentially normalizes the avoidance distance by the platform's maneuverability. In the fruit fly case, D/R was perhaps 10 cm divided by a few centimeters, or a value of around two to four. Now consider the aircraft we had been using- it was a foam “park flyer” type designed to be easy to fly and control, using only a rudder for steering, and thus by design it had limited maneuverability. Furthermore, we had loaded it up with sensors, telemetry, and other support electronics, which further reduced its ability to turn. Its R value was perhaps 20 meters, if not more! To achieve a similar D/R of two to four, it would need to start avoiding the tree at a distance of 40 to 80 meters or more. At that distance, the small tree we were trying to avoid was simply too small in the visual field for our sensors to detect. With that turning radius, our implementation was better suited to avoiding a large building, mountain, or cliff than a single tree.
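
The arithmetic is trivial, but making it explicit shows how quickly the required reaction distance grows. The snippet below simply multiplies a target D/R ratio by the rough R values quoted above (my back-of-the-envelope figures, not measurements):

```c
/* Back-of-the-envelope check of the D/R argument: required reaction
 * distance is D = (D/R) * R. Numbers are the rough figures from the text. */

#include <stdio.h>

int main(void)
{
    double target_ratio = 3.0;   /* middle of the two-to-four range seen in flies */
    double fly_R   = 0.03;       /* a few centimeters, in meters */
    double plane_R = 20.0;       /* the foam park flyer, in meters */

    printf("fly must react at   ~%.2f m\n", target_ratio * fly_R);   /* ~0.09 m */
    printf("plane must react at ~%.0f m\n", target_ratio * plane_R); /* ~60 m   */
    return 0;
}
```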

I was eager to achieve a successful demonstration of obstacle avoidance. Our sponsor at the time (DARPA) made it clear they required it. Once I had the D/R insight, I realized what we needed to do: boost the D/R of our system, and the most direct way was to decrease R, i.e. make the aircraft much more maneuverable. First, we found ways to shave a few grams off the sensors while modestly improving their performance. We then simplified the control and support electronics- we took out the telemetry downlink to the human operator, built a simpler controller board, and found lighter cabling. Finally, we scratch-built a new model aircraft from balsa wood, pine, and foam. The resulting aircraft (Figure 4) was smaller, at half the wingspan, and at least an order of magnitude lighter. It was ugly, inefficient, and made any aeronautical engineer who looked at it cringe. But it had what I, a chip designer by training but perhaps armed with a fundamental grasp of physics, realized would make the aircraft turn- a giant rudder!

Figure 4: Aircraft with improved D/R used to demonstrate obstacle avoidance

After honing the aircraft design, it took just a few weeks of tuning to get it to avoid a tree line. The result is this video that I shared previously: https://youtu.be/qxrM8KQlv-0

Lessons Learned

So, was it possible almost two decades ago to provide a drone with obstacle avoidance using the technology available then? Absolutely. In fact, we probably could have done this in the 1990s or even in the 1980s if we had known then what we know now. But it would not have been accomplished merely by adding obstacle avoidance to the drone. It would also have required designing the drone to support obstacle avoidance.

There are lessons from that demo that I still carry with me. First, and most important, is that it is necessary to take a systemic approach when implementing obstacle avoidance on a drone. Such an approach yields insights that are missed when you look only at individual components. In my case from 17 years ago, taking a systemic approach gave me the insight that successful obstacle avoidance required increasing D/R, and the most direct route at the time was to decrease R rather than increase D- in other words, make the platform more maneuverable rather than redesign the sensor.

A second lesson is that very often “less is more”. There is a benefit to simplicity and to eliminating excess that is often lost in practice. I am reminded of a saying attributed to the great automotive engineer Colin Chapman of Lotus- more power makes a race car faster on the straights, while more lightness makes a race car faster everywhere! I think this is an easy lesson to understand but a tough one to incorporate, because it runs contrary to habits we have developed as a society- pedagogy and society both reward people for working excessively hard and going through the motions to implement a complex solution, rather than taking the time to identify and implement one that is simpler and more elegant.

In another post I will discuss observations and lessons learned from more recent work providing small drones with obstacle avoidance, including whether a modular approach is even feasible at the current time. For now, I am curious to learn whether others have had similar experiences. Thank you for reading!


Comments

  • I really enjoyed your write-up!

  • Using your own flight controller means you'll have an uphill battle for adoption later on (one you will lose). The popular firmwares do what they do because that is what 99% of the users need (plus they already provide a modest window for what the bleeding edge needs, such as the modes I referred to above). If you need something extra and you are not covered by what they do off the shelf, you can either fork the firmware and muck with it, or spend the money you would have spent on in-house devs to hire one of the project's devs to do the mucking about the way you want.

    If it works well, it will be integrated like all the little bits of the hardware ecosystem have been- the lidar rangefinders, the precision landing beacons and whatnot.

  • @Andreas, I agree, and we are on the same page. An issue that can be faced is that some see the off-board mode as undesirable. Apparently the off-board mode bypasses all sorts of safeguards that some understandably want to keep. Therefore any requirement to not use an off-board mode will conflict with the ability to implement vision-based autonomy or obstacle avoidance, for the exact reasons you mention above. I am using vague and muted language on purpose. :)

    For our own work we use our own autopilots with off-the-shelf FCUs- that seems to give us the greatest flexibility and the tightest integration. But since the industry is far from having obstacle avoidance systems being one-size-fits-all, it would be nice if autopilots like the PX4 provided a more sophisticated set of ways to implement off-board control. (It may be the case that it does now; I haven't looked at this since last year. Also I can understand that the PX4 team already has enough on their plate!)

  • Offboard control will have to become the norm for vision based systems. The (quite demanding) computational muscle dictates the sensory hardware for vision systems. It is a very good idea to separate the electronics at the board level and allow each design to focus on getting what it does as well as it can while establishing a well defined and flexible interface between the two. We have been seeing this with companion computers for quite a while now.

    Ideally, you want sensing and path planning in a powerful board that sends navigation commands to the autopilot (brain / brain stem kind of thing) and lets the autopilot deal with the details of getting the movement right.

    Since you refer to "Pixhawk", I presume the well-known open source autopilot software you used was the PX4. Unfortunately, I am not very familiar with it but it appears that it allows attitude requests via its Offboard mode. In Arducopter there is the Guided_NOGPS mode which is a step in the right direction for that kind of thing. To the best of my knowledge, there is nothing similar for Arduplane yet, however, the information necessary for requesting the autopilot to achieve certain attitude targets certainly exists, it is "simply" a case of exposing them via a suitably designed API.

    A fairly good example of all this is the newish fad of VTOL hybrid planish looking multirotors or multirotorish looking planes or whatever you want to call them. A few years back, they were barely on the scene, now suddenly all the Q-modes have exploded on the scene because the hardware became mainstream. Similarly, an aeon ago Ardupilot meant Arduplane, then suddenly overnight everyone was flying quads and Arducopter started trending.

    So, if you are working in this field, it would make a lot of sense to partner up with some of the experienced firmware devs and design some flight modes that will allow you to request whatever data or movement commands you want.

  • @Mateusz- Thank you for sharing it! I will check it out there...

  • @Andreas- You bring up some good points. I think that higher-"D" avoidance systems are on the horizon, at least for larger drones, but once they are developed it will be quite a while before they are plug and play. One of the points of my D/R analysis is that it is not (yet) possible to think of the drone and any obstacle avoidance technology as two blocks that are simply put together. The needs for obstacle avoidance depend not just on the application and flight environment (both of which vary widely) but also on the drone itself. So part of successful obstacle avoidance will require designing a drone that can support it. Skydio has done this, whereas DJI has not.

    I understand the issue with the autopilot. I think the lack of any standard method of avoidance will hold things back in terms of open source autopilots for all except the most basic applications. I actually was involved in a project to try to incorporate our sensors on a well-known open source autopilot- it was a complete failure since the autopilot was not equipped to handle the type of information our sensors provide- either the "estimator" threw away our data (which was to be used for stability) or the stack lacked the support to take in our avoidance-type information.

    In some ways, I wonder if this presents an opportunity for hackers, makers, or developers closer to a drone application. A solution for avoiding trees outdoors will be different from one for flying in a dark subterranean tunnel; likewise, optimal solutions will vary depending on whether the platform is fixed-wing or rotary-wing. Each smaller group could therefore develop the avoidance stack (hardware, perception software, control software) for its own application. This will mean, though, that any open source autopilot will need to be modified for flexibility. For the Pixhawk, for example, this could mean embracing various levels of "offboard control" as a norm.

    I was planning to write about some of the above in a future post.

  • Thanks for the interesting write up Geoffrey! I'll share it in Weekly Robotics!

  • Low R means either multirotors or specially designed planes. What if you forget about decreasing R? Are we anywhere near increasing D to the point where it might be usable on a typical unmodified plane (let's say close to 100 metres or even more, but without losing the lower end of the scale)?

    Until this is doable, there is no point incorporating it at a low level into the autopilot (at least for planes). The reason we have the autopilot designs we have today is that the firmware indeed coevolved with the sensors that have been available. If we can get some competent obstacle detection at this troublesome range (tens of metres to a few hundred metres), then eventually the autopilot firmwares will adapt.

    Right now, we have either the bouncy ones that bounce something off something and grab the echo (IR reflectance, sonar, light time-of-flight), which totally suck at distance (low D) and hence are good only for multirotors, or the vision based ones, which, well, kind of suck. I mean, don't get me wrong, there are all sorts of awesome demos all over the place of this or that specific use case, but we still haven't got a "killer app" that I am aware of. So, what do you think, can we actually get robust detection at mid-range D any time soon?
