I've been thinking a lot lately about how cool it would be to implement inertial dead reckoning on ArduCopter for a GPS-free loiter mode of sorts. I just wanted to get some discussion going from people who have studied this or have tried implementing it, as I know it is difficult.

To begin with, this is a good read about getting position from accelerometer data: http://perso-etis.ensea.fr/~pierandr/cours/M1_SIC/AN3397.pdf

So my thoughts are that we could integrate the acceleration reading from the accel, subtracting out the acceleration component due to tilt angle (we can do this since we have a nice filtered angle reference, or gyro integration). This, once filtered nicely, should give us a pretty good reading of lateral velocity. Once that is accomplished, we do one more integration and obtain lateral position. Since all we really need for position hold is a relative position, we could experiment with resetting the position integration at some time interval (or every time the sticks are released) to keep our drift error down. How cool would this be if it actually worked?!
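To make the idea concrete, here is a minimal Python sketch for a single (forward) axis: subtract the gravity component that tilt leaks into the body-x accelerometer, integrate once for velocity and again for position, and zero the position integrator whenever the sticks are released. The function names and the simple Euler integration are mine for illustration, not anything from the ArduCopter code.

```python
import math

G = 9.81  # gravity, m/s^2

def dead_reckon_step(state, accel_body_x, pitch_rad, dt):
    """One x-axis step of the proposed scheme: remove the gravity
    component due to tilt, then integrate twice.

    state     : (vel, pos) tuple
    pitch_rad : filtered pitch from the attitude solution
    """
    vel, pos = state
    # With the nose pitched up by pitch_rad, gravity leaks roughly
    # G*sin(pitch) into the forward accelerometer axis; remove it first.
    accel_earth_x = accel_body_x - G * math.sin(pitch_rad)
    vel += accel_earth_x * dt
    pos += vel * dt
    return (vel, pos)

def reset_position(state):
    """Zero the position integrator (e.g. each time the sticks are
    released) so drift only accumulates over short intervals."""
    vel, _ = state
    return (vel, 0.0)
```

A motionless but tilted vehicle integrates to zero (the tilt term cancels the gravity leakage), which is exactly the case that breaks naive integration.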

I am thinking of buying an ArduIMU v3 just to play around with this, to see if I can get a robust measurement.

Please chime in on this! If this worked even remotely well, there may no longer be a need for optical flow, or even sonar (as we could get relative Z position too)!

-Jamie

• Developer

I've been spending a lot of time on this recently, trying to use accels for altitude hold in ArduCopter, with some mixed results.  Above is from a bench test where I lifted the copter off the ground by 1 meter, then raised it above my head 5 times.  Then I leaned it back 40 degrees and lifted it above my head 3 more times.  You can see the baro and baro+accel altitudes in the top graph and they're very close, but note that the accel graph is ahead of the baro graph.  It's hard to see, but it's 400ms ahead.  The accel-based velocities are in the 2nd graph above.

Sadly, below you will see the results in flight.  In this test I just tried to fly the copter level for 30 seconds or so.  The baro altitude is a more accurate representation of how it flew.  You can see that the accel-based altitudes are up and down.  This is because the corrections applied back to the accelerometer are constantly overshooting, it seems.

The method is... firstly, I've been using the Gauss-Newton method to calculate the accelerometer scaling and offsets.  This is working pretty well, and in simple tests performed after the calibration, no matter which way I hold the copter (if it's not moving), I see gravity as 9.78 ~ 9.84 (ideally it should be 9.81), so less than 0.5% error.
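For anyone curious what a Gauss-Newton accelerometer calibration can look like, here is a rough numpy sketch (my own function names, not the actual ArduCopter code): static readings captured in several distinct orientations are fitted so that scale * (raw - offset) has magnitude g in every orientation.

```python
import numpy as np

G = 9.81

def calibrate_accel(samples, iters=20):
    """Gauss-Newton fit of per-axis offsets and scales.

    samples : (N, 3) raw accel readings taken while motionless in
              several distinct orientations.
    Returns (offsets, scales) such that scales * (raw - offsets)
    has magnitude ~G in every orientation.
    """
    m = np.asarray(samples, dtype=float)
    o = np.zeros(3)   # offset estimate
    s = np.ones(3)    # scale estimate
    for _ in range(iters):
        c = s * (m - o)                     # calibrated readings
        norm = np.linalg.norm(c, axis=1)
        r = norm - G                        # residual per orientation
        # Jacobian of r with respect to [offsets, scales]
        J = np.empty((len(m), 6))
        J[:, 0:3] = (c / norm[:, None]) * (-s)
        J[:, 3:6] = (c / norm[:, None]) * (m - o)
        # Gauss-Newton step: solve J * delta = -r in least squares
        delta, *_ = np.linalg.lstsq(J, -r, rcond=None)
        o += delta[0:3]
        s += delta[3:6]
    return o, s
```

Six params means you need at least six well-spread orientations; more orientations over-determine the fit and help average out noise.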

Then, based upon the work that Jason Short did, I convert body-frame accelerometer values to earth frame in a lib called AP_InertialNav, and then these plus the latest baro readings are passed to a 3rd-order complementary filter based upon work by Jonathan Challinger.
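A rough sketch of how a 3rd-order complementary filter of this kind can work (the gains and structure below are a textbook third-order loop, not necessarily Jonathan Challinger's exact formulation): the baro-minus-estimate error is fed back at three rates, correcting position, velocity, and a learned accelerometer bias term.

```python
class ThirdOrderComplementary:
    """Baro + earth-frame accel altitude filter.

    Gains chosen so the error dynamics are (s + 1/tau)^3, i.e. a
    triple pole at the crossover time constant tau.
    """
    def __init__(self, tau=1.0):
        self.k1 = 3.0 / tau
        self.k2 = 3.0 / tau**2
        self.k3 = 1.0 / tau**3
        self.pos = 0.0
        self.vel = 0.0
        self.accel_corr = 0.0   # learned accelerometer bias correction

    def update(self, accel_earth_z, baro_alt, dt):
        err = baro_alt - self.pos
        # Feed the baro error back at three rates (3rd-order loop).
        self.accel_corr += self.k3 * err * dt
        self.vel        += self.k2 * err * dt
        self.pos        += self.k1 * err * dt
        # Predict forward with the bias-corrected acceleration.
        a = accel_earth_z + self.accel_corr
        self.pos += self.vel * dt + 0.5 * a * dt * dt
        self.vel += a * dt
        return self.pos
```

The nice property is that a constant accel bias ends up absorbed into accel_corr, so the altitude estimate settles on the baro in the long term while keeping the accel's fast response.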

I believe I'm getting close, but any advice would be greatly appreciated, as I feel this last hurdle could be tough to get over.  All my code can be seen in the rmackay999-wip2 clone's accel_calib branch.

• Some time ago I tried to integrate acceleration in my AutoPitLot onboard computer. Example results of the first integration (speed) from a hexacopter flight are in the picture as XY graphs: red - XY plane, green - XZ plane, and blue - YZ plane. Despite calibration and thermal compensation, it is not easy to do well.

• T3

Hi Jamie,

Hobby grade gyros and accelerometers have enough bias and drift to require drift compensation. The situation is more difficult with accelerometers than for gyros, since their signals have to be integrated twice to get position, whereas gyros have to be integrated only once to get attitude. Also, accelerometers measure acceleration minus gravity, so you have to account for gravity.

Because you have to integrate accelerometer signals twice, any residual bias causes a position error that grows with the square of time, so it does not take long before uncompensated dead reckoning based on accelerometers becomes unusable.
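To put numbers on the quadratic growth: a constant bias b, integrated twice, gives a position error of p(t) = 0.5 * b * t^2. Using the roughly 0.5%-of-gravity residual quoted earlier in this thread (about 0.05 m/s^2) as an illustrative figure:

```python
def position_error(bias, t):
    """Position error from an uncompensated accelerometer bias:
    integrating a constant bias twice gives 0.5 * bias * t**2."""
    return 0.5 * bias * t * t

# ~0.5% of gravity (about 0.05 m/s^2) of residual bias:
print(position_error(0.05, 10))   # 2.5 m after 10 seconds
print(position_error(0.05, 60))   # 90 m after 1 minute
```

So even a calibration that looks excellent on the bench drifts tens of meters within a minute, which is why pure accelerometer dead reckoning degrades so fast.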

There is another technique for long-term dead reckoning without GPS that does not rely so much on the accelerometers, one based on airspeed and windspeed, if you have a pitot tube and a magnetometer. In that case, the position error grows only linearly with time. With that technique, you have to measure windspeed somehow, and that requires GPS. But if GPS fails after you have measured windspeed, you can proceed without it, using the last measured value of windspeed.
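This airspeed/heading technique can be sketched in a few lines (assuming a pitot airspeed, a magnetometer heading, and the last wind estimate from before GPS was lost; the names are illustrative): ground velocity is the air-velocity vector plus the wind vector, integrated only once, so sensor errors grow linearly rather than quadratically.

```python
import math

def dead_reckon(pos, airspeed, heading_rad, wind_ne, dt):
    """One step of airspeed/heading dead reckoning.

    pos      : (north, east) position estimate, meters
    airspeed : pitot airspeed, m/s
    heading_rad : heading from the magnetometer, radians from north
    wind_ne  : (north, east) wind estimate, m/s (last value measured
               while GPS was still available)
    """
    # Ground velocity = air velocity + wind.
    vn = airspeed * math.cos(heading_rad) + wind_ne[0]
    ve = airspeed * math.sin(heading_rad) + wind_ne[1]
    # Single integration: errors grow linearly with time.
    return (pos[0] + vn * dt, pos[1] + ve * dt)
```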

Short term dead reckoning is performed by MatrixPilot. MatrixPilot is described here and here. The dead-reckoning technique that MatrixPilot uses is described here, and the accelerometer bias compensation is described here and here.

Best regards,

Bill Premerlani

• What Curt said is right. I have seen differences in accels of ~3-4% based on initial motionless offset calibration. To get distance from accelerometer data you need to double integrate: first to get velocity, and second to get distance. It might be useful in the very short term, but it would still need an anchor to reality, i.e. GPS or other (optical, sonar) data.

At a minimum you would need offset and scale calibration to adjust all the accels to the same values. I haven't tried, but I suspect the noise will overwhelm the double integration and it will not be a reliable solution.

• Here are some random thoughts and observations.  With gyros it is really easy to zero out the bias on startup: keep the sensor motionless for a few seconds, average the gyro readings, call that your zero point, and subtract those values from future readings.  For accelerometers I see two challenges: (1) they seem to be much noisier, which leads to faster drift compared to gyros (at least based on my MPU-6000 observations), and (2) any calibration procedure to zero out the bias and estimate the scale error is much more involved and subject to end-user error.  I think it would be pretty difficult to do a good field calibration once the sensor is installed in an air vehicle.  So my point is, for accelerometers, not only are you fighting the natural drift, you are also very sensitive to bias and scale errors.  The APM2 that I received was pretty good out of the box in the X axis, but Y was off by about 0.3 m/s^2 with a 98% scale factor error, and Z was off by about 0.6-0.7 m/s^2 with a 90-95% scale factor error.  Also keep in mind that while the MPU-6000 is pretty stable, it is not temperature calibrated, so these errors can change as your electronics heat up or as you move from one environment to another.
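The gyro startup procedure described above is simple enough to sketch (illustrative names, not any specific autopilot's code): average a few seconds of motionless readings and subtract that mean from everything afterward.

```python
def estimate_gyro_bias(samples):
    """Average raw 3-axis gyro readings captured while the vehicle is
    held motionless; the per-axis mean becomes the zero point."""
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(3)]

def correct(reading, bias):
    """Subtract the startup bias from a raw 3-axis reading."""
    return [r - b for r, b in zip(reading, bias)]
```

The same trick does not transfer cleanly to accelerometers: you cannot hold the vehicle in a single orientation and average, because gravity is always present, which is why the multi-orientation calibration above is needed instead.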

One way or another you'll need to come up with some absolute position reference to account for drift, and hopefully do some on the fly bias & scale correction to help minimize drift and calibration errors.

• If I remember correctly, the drift problem has already been addressed (though not completely eliminated). UPenn uses inertial measurements to do indoor mapping with a Kinect.

I think this should work, assuming that you can get accurate X, Y, and Z measurements by combining the IMU data. The integration part should be fairly easy since it is all software-based. No need for complicated formulas, etc.

• The best navigation-grade IMUs that money can buy today have a drift of over 1m per minute if left uncorrected.  Making an IMU that can estimate anything useful enough for position hold without correction, using cheap MEMS sensors and the level of calibration that can be achieved outside of a lab, is not possible - yet.