Curt Olson's Posts (11)


Senior Telemaster Autonomous Take Off

This is probably basic/simple stuff, but here's a quick (raw) video of a fully autonomous wheeled take off in a Senior Telemaster.  The basic strategy used so far is to steer with the rudder to hold the heading error at zero.  Then, once airborne, use a small amount of gentle bank to steer the heading error to zero.  My roll gains could be better, but they finished the day much improved over where they started (and hopefully will continue to improve as I get a chance to review the flight data.)  There is some lateral drift as you can see, but probably on par with the performance of an "average" RC pilot, and on this day there was a moderate crosswind component to deal with.
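For the curious, the steering strategy described above amounts to a couple of clamped proportional controllers.  Here's a minimal sketch (the gains, limits, function names, and sign conventions are my own placeholders for illustration, not the actual autopilot code):

```python
def wrap(err_deg):
    """Wrap a heading error into the [-180, 180) degree range."""
    return (err_deg + 180.0) % 360.0 - 180.0

def takeoff_steering(target_hdg, current_hdg, airborne,
                     rudder_gain=0.02, bank_gain=0.5, max_bank=10.0):
    """Drive heading error to zero: rudder on the runway, gentle bank once airborne.

    All gains and limits here are illustrative placeholders.
    """
    error = wrap(target_hdg - current_hdg)
    if not airborne:
        # Ground roll: steer with rudder only (normalized deflection in [-1, 1]).
        rudder = max(-1.0, min(1.0, rudder_gain * error))
        return {"rudder": rudder, "target_bank_deg": 0.0}
    # Airborne: command a small bank angle toward the target heading.
    bank = max(-max_bank, min(max_bank, bank_gain * error))
    return {"rudder": 0.0, "target_bank_deg": bank}
```

The bank limit is what keeps the airborne correction "gentle": even a large heading error only ever requests a shallow turn.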

The autopilot system we are developing uses the APM2 for sensors and a Gumstix Overo for all the heavy lifting and computation.  I have a series of blog postings starting here that show a bit more about the system.

Read more…

UAS Search and Rescue Demo (Simulation)

Simulators can be powerful tools for developing and testing UAS flight control systems, as well as for prototyping new ideas.  Here is an interactive FlightGear based demonstration showing:

  • Auto-launch from a carrier.
  • Circle holds.
  • Route following.
  • Gyro stabilized camera.
  • Simulated search and rescue mission.
  • Auto-approach and landing on a carrier (factoring in winds and even carrier motion.)


To get started, go to the following link and read through the installation and operating instructions:


The flight computer flies everything from start to finish. Your job as UAS operator is to give mission commands and operate the gyro camera (and enjoy some of the other views as well.)

This demonstration can be flown with the FlightGear multiplayer system enabled, so you might see other UAVs or aircraft in the airspace.  I enjoy enabling live METAR weather, flying at different times of day, and turning on moderate turbulence to give the flight control system an extra workout.  In low visibility the search and rescue portion of the mission can be very difficult, if not impossible.  Sometimes you get lucky, sometimes you don't.


Just to take a step back here.  The point of this demonstration is to show:

  • The power and realism available in flight simulators.
  • The ability to script complex flight control logic using FlightGear's built-in scripting system.
  • This entire demo is created with a stock version of FlightGear.  You don't need external hardware, you don't have to fiddle with complicated communication protocols, you can do everything inside of FlightGear first -- on a single PC.
  • Auto-landing tasks can be very complicated if you wish to factor in wind and fly a stable and optimal approach.  Adding multiple entities in a multiplayer simulation can expose the need to design an approach that ensures traffic separation.  Flying the logic over and over in simulation under a variety of conditions helps you spot situations you might not have otherwise accounted for and improve and refine the code in ways that would be much more difficult to do in real life.
  • All of the flight control in this demonstration is done without "cheating".  By this I mean all the sensor inputs to the flight controller are the same sorts of inputs a small embedded autopilot could have.  The autopilot only manipulates the control actuators.  After that we let the flight dynamics do whatever they are going to do.  We sense, we actuate, just like in real life.



I'm calling this demo a "BETA" so I'm interested in feedback ... especially feedback on the instructions at the webpage link.

Be very careful though; you might end up having fun and wasting a lot of time with this.  I know I have!  :-)


Read more…


Recently I purchased an inexpensive USB-based oscilloscope to help me do some low-level electronics debugging.  Owning an oscilloscope is so cool I just had to tell everyone about it!  And the prices of these USB-based oscilloscopes are within range of many hobbyists.  Now I'm kicking myself for not buying one of these years ago.

The above screen shot shows the output of an ATTINY13A microcontroller (similar to what the ardupilot uses for its failsafe / manual override circuit.)  My test code reads a PWM signal in (via an interrupt service routine) and generates a new signal out on a different pin that mirrors the input signal.  I can see exactly what is going on with both the input and output pins using my cool new oscope.

I'm learning as I go (just like everyone else) but I thought it would be fun to share my experience in a bit more detail and hopefully encourage some people here to also take the plunge.  I have many more details and screen shots posted at my personal blog here:


Read more…

Tracking Ocean Debris in the North Pacific

This is a movie showing the drift pattern of some ocean debris (a large hawser line.) We were out 1000 nm north of Hawaii on a NOAA research ship. This cruise was part of a project that has also provided a small amount of funding to develop a "marinized" small UAV that could be deployed from a variety of ship sizes with minimal infrastructure requirements.
We found the hawser line on April 2, 2008 and attached a tracking buoy to it. The buoy is still reporting its position twice daily, and it's really fascinating to see a map of its voyage over the past THREE years.
If anyone is interested in this stuff I posted a longer entry on my personal blog here:
Read more…

Shadow Cam


This is a little proof of concept video I just put together. The goal is to always keep my aircraft's shadow in the field of view.

Equipment: Senior Telemaster. Fly-Cam-One-3 with built-in pan/tilt. Sparkfun 6DOFv4 IMU (it was lying around so I used it.) Gumstix flight computer. Ardupilot used for controlling the pan/tilt servos on the camera.

The flight is 100% manually piloted. Camera is 100% automatically pointed.

On board I am running a 15-state Kalman filter for attitude estimation. The filter converges to the "true" yaw angle independent of ground track, wind, and magnetometer. This is actually critical for camera pointing.

On the ground I have a small app I whipped together Thursday evening that computes the sun location for the current time in ECEF coordinates. It then converts the sun "vector" to NED coordinates based on a GPS connected to the ground station (assuming I'm not ranging very far from my start point.) The code computes a new sun angle every minute. Finally, the sun vector is inverted to get a shadow vector, and that is sent up to the aircraft as its target point vector (once a minute.)

Notice: sun vector * [ -1 -1 -1 ] = shadow vector.
Also: sun vector * [ 1 1 -1 ] = reflection vector (where we would be looking at the sun's reflection off the water surface.)
Also: sun vector * [ 1 1 1 ] = rainbow vector if we would happen to fly above the clouds (this would keep us looking at the center of a rainbow circle/arc.) :-)
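The elementwise sign flips above are trivial to express in code.  A minimal sketch of the two vectors I actually use (function names are mine; the NED sun vector is assumed to point from the aircraft toward the sun):

```python
def shadow_vector(sun_ned):
    """Negate all three components: the direction of the aircraft's shadow."""
    n, e, d = sun_ned
    return (-n, -e, -d)

def reflection_vector(sun_ned):
    """Flip only the down component: where the sun's reflection appears on water."""
    n, e, d = sun_ned
    return (n, e, -d)
```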

In order to run myself over with the shadow from the aircraft's perspective I need to fly the airplane through the sun from the ground pilot's perspective.

Disclaimers: this was my first time out trying something like this so the video is rough. The pan/tilt on the flycam is very low grade, but works well enough for a demo. I'm also flying with an IMU that is about 2 orders of magnitude coarser than I'm used to flying with, so that degrades my attitude estimation more than I would have liked (but the filter still converges.) I put very little effort into aligning the IMU mount with the camera mount, so there are certainly a few degrees of bias just from mounting issues. Finally, I only eyeballed the mapping between servo commands and pan/tilt angles so I'm in the ball park, but there are certainly errors there too. It's a hack, but made for a fun afternoon. :-)
Read more…

Command Augmentation System

I've been doing some more work on my CAS system. CAS stands for "Command Augmentation System" and implies that the pilot's stick inputs do not map directly to control surface deflections. Instead, the stick inputs command a roll/pitch request and the flight computer does its best to match it. The video (hopefully) can do a much better job of explaining the system than words.
More details and additional "real" video footage of the system in action are here:
Read more…

Stability Augmentation System

I just added a simple stability augmentation system (SAS) to my autopilot code. I am posting a video of the very first test flight. As you can see if you watch it, I had some success, and there were a few things to tweak for the next flight.

I've posted a few more details and explanations here:

This is my first crack at an SAS, and it is very simplistic. The pilot stick inputs map to roll and pitch *rates*. When the pilot centers the stick, the SAS holds the current target roll and pitch angles as best as it can. For now I've let the pilot have direct control over the throttle and rudder while the SAS is active. This allows the pilot to taxi on the ground to set up a take off. It allows the pilot to steer with the rudder while the SAS holds the wings level. I'm not convinced this is the best arrangement, but for a first crack it certainly is workable. By my third flight I was able to taxi to the end of the runway, do a full wheel take off, fly a complete flight, and finish with a smooth landing -- with the SAS turned on from start to finish.
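For what it's worth, the rate-command / attitude-hold behavior described above can be sketched in a few lines (this is my own illustrative logic with made-up gains, not the actual SAS code):

```python
class RateCommandSAS:
    """Minimal rate-command / attitude-hold sketch.

    Stick deflection commands a roll/pitch *rate*; when the stick is centered,
    the current target angles are simply held.  Gains are placeholders.
    """
    def __init__(self, max_rate_dps=30.0, hold_gain=0.04):
        self.max_rate_dps = max_rate_dps
        self.hold_gain = hold_gain
        self.target_roll = 0.0
        self.target_pitch = 0.0

    def update(self, stick_roll, stick_pitch, roll, pitch, dt):
        # Integrate stick input (range [-1, 1]) into the target attitude.
        # A centered stick adds nothing, so the last target is held.
        self.target_roll += stick_roll * self.max_rate_dps * dt
        self.target_pitch += stick_pitch * self.max_rate_dps * dt
        # Proportional surface commands chase the target angles.
        clamp = lambda x: max(-1.0, min(1.0, x))
        aileron = clamp(self.hold_gain * (self.target_roll - roll))
        elevator = clamp(self.hold_gain * (self.target_pitch - pitch))
        return aileron, elevator
```

Note that throttle and rudder never appear here; as described above, the pilot keeps direct control of both while the SAS is active.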

I have a few questions though.

Is it fair to call what I've put together an "SAS" or is there a better term I should be using? I hate playing acronym of the day though. In this field everyone likes to make up their own variants anyway. I've set up my system so that stick deflections map to roll and pitch rates and then the system holds the current bank and pitch angles when the stick is centered. This works pretty intuitively as long as you recognize and account for some latency. But how have other people rigged up their systems? What do people feel works the best and is most intuitive from a pilot's perspective? What is the ardupilot mega doing?


Read more…

Elevator Gain Tuning

One of the most challenging aspects of autopilot setup is tuning the gains for a particular airframe. When the gains are tuned poorly, the aircraft may oscillate excessively, it may lag way behind the target pitch angle or roll angle or velocity, or it may never reach the target values. Poorly tuned gains could destroy an airframe in a worst case scenario, but often people just live with non-optimal gains that aren't great but work well enough to get the aircraft around the sky. It's hard to know which gains to tune and why, and a person could play with the numbers all day and only manage to make things worse. It's easy to spot a problem; often the aircraft will look like it is fighting itself even though it does make its way to where it should be, or it just may not do what you ask it to do.

The first half of the attached movie shows an example of badly tuned elevator gains. I make some simple improvements and then show the results. It's a windy day so don't expect perfection, but at least from the ground it looked rock solid.

I have some more details and a link to a PID controller tutorial I wrote a few years ago. Included in the tutorial are some different strategies and tips for tuning PID controllers.
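As a point of reference, the kind of controller being tuned here is typically a PID loop.  A minimal sketch with output clamping and simple anti-windup (the structure is textbook PID; the gains and limits below are placeholders, not values from my tutorial):

```python
class PID:
    """Textbook PID with output limits and a simple anti-windup scheme."""
    def __init__(self, kp, ki, kd, out_min=-1.0, out_max=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = None

    def update(self, target, actual, dt):
        error = target - actual
        self.integral += error * dt
        # No derivative kick on the very first sample.
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * deriv
        # Clamp the output; back out the integration step when saturated
        # so the integrator doesn't wind up while the surface is pegged.
        if out > self.out_max:
            self.integral -= error * dt
            out = self.out_max
        elif out < self.out_min:
            self.integral -= error * dt
            out = self.out_min
        return out
```

Oscillation usually points at too much proportional (or derivative noise) gain; persistent lag behind the target points at too little proportional or integral gain.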
Read more…

I just thought I'd share a quick video (and picture.) This is a movie I made of a replay of a real flight from a couple days ago. I collected the data remotely while the flight was in progress via a MaxStream radio modem connection between the aircraft and the ground station. I saved the flight data on the ground station and can then replay it later in FlightGear. The display is a "glass cockpit" style display I'm just beginning to develop.

Disclaimer: I have really noisy inertial sensors rigged up at the moment, 10-bit ADCs, you'd laugh if I described my IMU calibration procedure, a probably very non-optimally tuned Kalman filter for these sensors, windy day (yeah that's it, blame it all on the wind) :-)

But the really cool stuff (I think), if you can look past some of the less smooth flying, is to see all the elements of a very dynamic system playing together in one view.

For instance: the altitude tape shows the current altitude and the target altitude (via a magenta altitude bug.) If you are below the target altitude, the VSI will show a target climb rate (or descent rate if you are too high.) The autopilot tries to match the target rate of climb by manipulating pitch. The green vbars show the target pitch angle. The yellow bird shows the actual pitch angle. The autopilot manipulates the elevator to try to achieve the target pitch angle ... and you can see all these elements working together to achieve the goal.
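The cascade just described (altitude error → target climb rate → target pitch → elevator) can be sketched as nested clamped proportional stages.  All gains and limits below are invented for illustration; the real loops are individually tuned PIDs:

```python
def altitude_cascade(target_alt_ft, alt_ft, vsi_fps, pitch_deg,
                     alt_gain=0.1, max_climb_fps=8.0,
                     vsi_gain=1.0, max_pitch_deg=15.0,
                     pitch_gain=0.05):
    """Altitude error -> target climb rate -> target pitch -> elevator.

    Each stage is a clamped proportional controller with placeholder gains.
    """
    clamp = lambda x, lo, hi: max(lo, min(hi, x))
    # Stage 1: altitude error sets the target rate of climb (the VSI bug).
    target_climb = clamp(alt_gain * (target_alt_ft - alt_ft),
                         -max_climb_fps, max_climb_fps)
    # Stage 2: climb-rate error sets the target pitch angle (the green vbars).
    target_pitch = clamp(vsi_gain * (target_climb - vsi_fps),
                         -max_pitch_deg, max_pitch_deg)
    # Stage 3: pitch error drives the elevator.
    elevator = clamp(pitch_gain * (target_pitch - pitch_deg), -1.0, 1.0)
    return target_climb, target_pitch, elevator
```

The clamps are what keep a large altitude error from commanding an extreme climb or pitch attitude.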

There is a similar process with route navigation. The system computes a target ground track heading using WGS84-based math. The target heading is marked by a magenta heading bug on the horizon heading tape. There is a white "V" indicator that floats on the horizon heading tape which indicates the ground track heading. The actual heading shown is "true" heading as computed by a 15-state Kalman filter. (The filter converges to true heading, independent of wind, side slip, etc.) So the autopilot computes a target roll angle to try to line the white "V" ground track indicator up with the magenta heading bug. And finally the ailerons are manipulated to try to match the target roll angle. If you watch the video you can probably see that I need to increase the gains a bit on my ailerons (maybe the end point limits as well.) What do you think? You can see the actual roll angle often lags pretty far behind the target, and this leads to some excessive serpentining as the aircraft flies towards the target. But if I dial up the gains too much, I may start overreacting to my filter's attitude estimate correction jumps. I think there is a balancing act to be made between tracking your targets quickly and accurately versus slowing things down a bit to help smooth out the flight.

Finally, you can also watch airspeed. Right now the autopilot is configured to try to match the target airspeed by manipulating the throttle, so you can watch the throttle move up and down to try to match airspeed. Of course as you fly the course and bank into turns, encounter up and down drafts, and work around filter estimation errors, everything is changing all at once.

On the one hand, I would like to see much smoother and tighter control, but on the other hand I have to sit back in a bit of wonderment just watching all the pieces working together and doing what they are supposed to do.

I think this is evolving into a really powerful system for evaluating how well an autopilot system is tuned and how well it is tracking its targets. If your PID gains are inducing oscillations, you can quickly see that. If the PID gains are too low, you can see the system react too slowly. You can see your control throws max out at their preset limits (or not, as the case may be.)

And for what it's worth, this display can also be fed from real-time telemetry data, so you could optionally have this running during the flight ... I'm not sure why you'd want it ... maybe during the design and development phase, or to impress your wife or girlfriend.
Read more…

Ardupilot based 4 channel servo subsystem.

I've mentioned this project a couple times and finally, here it is.

What: This is an ardupilot board that has been modified as per the picture above. The modifications allow reading the position of 4 channels from your receiver and driving 4 output servos. The firmware has been altered to remove support for GPS and instead use the one ATmega328 serial port to write out the receiver channel data and read in servo position commands from an external computer.
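The real byte format lives in the attached firmware, but to illustrate the idea of a framed serial link between the host computer and the board, here is a hypothetical host-side encoding (the start bytes, packet layout, and checksum are inventions for this sketch, not the actual protocol):

```python
import struct

START = b"\x93\xe0"  # arbitrary two-byte frame marker, invented for this sketch

def encode_servos(positions):
    """Pack 4 servo commands (PWM microseconds) into a framed packet."""
    payload = struct.pack("<4H", *positions)       # 4 little-endian uint16s
    cksum = sum(payload) & 0xFF                    # trivial one-byte checksum
    return START + payload + bytes([cksum])

def decode_channels(packet):
    """Unpack 4 receiver channel values from a framed packet, or None if bad."""
    if len(packet) != 11 or packet[:2] != START:
        return None
    payload, cksum = packet[2:10], packet[10]
    if (sum(payload) & 0xFF) != cksum:
        return None
    return struct.unpack("<4H", payload)
```

A frame marker plus checksum like this is the usual way to resynchronize a byte stream after serial noise or a dropped byte.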

Why: I am working on a Gumstix-based autopilot. The Gumstix runs all the "smarts", but I need a simple way to decode receiver channels and drive servos, and I need a manual override safety switch. The ardupilot provides both of these.

I'm no master at soldering, but I managed to avoid soldering my fingers to the board and everything works, so I'm happy.

I am attaching my firmware mods (based on the Ardupilot-2.2.3 firmware with lots of hacking and chopping.) My current incarnation only does servos, but I tried to leave the door open to attaching other analog sensors which could then also be reported to the host computer ... this could include pressure sensors, voltage sensors, etc.

I ran into problems using pulseIn() to read the receiver pulses (which made sense after I looked at what pulseIn() is actually doing) so I wrote my own routine that watches all 4 channels at once and times the pulse. I haven't convinced myself that I have the overall board timing locked down exactly the way I want it, but for now it seems to work pretty well. I can read 4 channels of data off my receiver (+ the selection channel state) and I can drive 4 servos from the host computer (tested with a simple sine wave function.)

On the subject of hardware in the loop testing, a board mod like the one described here could be used to read the servo outputs of a standard ardupilot and feed them to a simulator to be translated into control surface commands there. Some additional software/communication work would be required, but it should be a reasonably straightforward project.
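To make that idea concrete, the bridging step boils down to mapping servo pulse widths to normalized control surface deflections and handing them to the simulator over some transport.  A minimal UDP sketch (the port, packet layout, and function names are all assumptions for illustration, not a real simulator interface):

```python
import socket
import struct

SIM_ADDR = ("127.0.0.1", 6789)   # placeholder address for the simulator input

def pwm_to_surface(us):
    """Map a 1000-2000 microsecond servo pulse to a [-1, 1] deflection."""
    return max(-1.0, min(1.0, (us - 1500) / 500.0))

def forward_servos(pwm_channels, sock, sim_addr=SIM_ADDR):
    """Translate 4 servo pulses into control surface commands and send them."""
    surfaces = [pwm_to_surface(us) for us in pwm_channels]
    # 4 little-endian doubles: aileron, elevator, rudder, throttle (assumed order).
    sock.sendto(struct.pack("<4d", *surfaces), sim_addr)
    return surfaces
```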

Read more…
I'm almost embarrassed to post this video. It was done almost entirely with open-source software, so it's obviously way behind what people are doing with commercial software and commercial systems. But I have fun and entertain myself with this stuff, so I thought I'd share a brief snippet.

What you are seeing is FlightGear. I have a FlightGear model of a Sig Rascal 110 (which I've flown in real life many times.) The 3d model and the flight dynamics model are also open-source, so of course are also subpar compared to anything that would be done commercially. The FlightGear flight dynamics engine is outputting gyro, accelerometer, and GPS data to an external embedded computer running MicroGear. MicroGear takes the "sensor" data, runs it through a 15-state Kalman filter (the one piece here that isn't open-source) and estimates roll, pitch, yaw, and location.

Because this is FlightGear I already know the true pitch, roll, and yaw, but I promise I'm not cheating here. The Kalman filter on the embedded board is estimating these values and using them as input to the autopilot and routing algorithms also running on the embedded board. The only difference between this and real life, from the microgear/embedded-processor point of view, is that it's not getting its data from the onboard sensors and it's not driving servos directly. Instead it sends the servo commands back to FlightGear and the control surfaces are moved over there in the simulator.

This turns out to be a really nice hardware in the loop testing platform, and many subtle issues that show up in real life also show up in the simulator. Because this is FlightGear, a person can throw a variety of weather conditions at the system: turbulence, high winds, etc. You can disable fuel consumption and run for hours or days if you like.

I am very tempted to turn in a "virtual" entry into one of the upcoming DIY drones contests. :-) Oops, I almost started to crack a smile there.

Ok, I'm back to my long, sullen, downtrodden facial expression, because after all this is pretty much all open-source software -- which just sucks -- total crap I know -- sorry to even waste your time and bandwidth with any of this, guys. I won't post anything ever again until I have something actually useful or interesting to show.

Curt.
Read more…