Based on the sample data provided by Loris, I have written some code to produce an Allan Variance plot of the data. The program source is ISO C++98 and should compile with any standards-compliant compiler; it is tested with G++ 4.4, and I expect MSVC 2008 or later to handle the file just fine.

This particular program is somewhat specific to the data provided by Loris. It expects 16x oversampled sums, in integer ADC counts, and performs the floating-point division back to native counts itself. Just compile and link the program in one command (use the g++ front end rather than gcc, so the C++ standard library gets linked):
$ g++ -o allan_variance allan_variance.cpp

To run the program, provide it with a data file and the sampling interval. The data file must be provided as a list of whitespace-separated ADC samples, with six channels per entry. The first three channels are treated as x, y, and z axis gyros, and the remaining three channels are ignored.

Prior to actually using the data file, I massaged it somewhat. The original file has some weird line endings, something like \r\r\n. So I piped it through dos2unix and removed the blank lines with sed.
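For reference, the whole cleanup fits in one pipeline. Here `tr -d '\r'` stands in for dos2unix (it strips every CR, which handles the double \r), and the file names are made up for illustration:

```shell
# Write a small sample with the odd \r\r\n line endings.
printf 'a\r\r\nb\r\r\n' > raw_data.txt
# Strip carriage returns, then drop any blank lines with sed.
tr -d '\r' < raw_data.txt | sed '/^$/d' > clean_data.txt
```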



Over the entire sample period, the noise is dominated by white noise in the sensor, not by bias drift. I expect that the bias drift won't show up until the data stream is much longer, on the order of half an hour or so.  Gnumeric was used to produce the plot.




Replies to This Discussion

Hi,

You did a great job! Did you plot the whole file? There are around 6000 samples, so at 150 Hz it should last about 40 s.

Could you try with the attached data file? It was recorded in the presence of noise.

Could you also try right shifting instead of floating-point division? What I noticed is that the more you right shift, the lower the variance gets.


Loris
Right shifts are divisions; they are just lossy ones.

In order to reliably compute the Allan Variance at some averaging time t, you need a data record spanning at least 6-7 times t. I picked 7. 40 seconds of data gets you roughly 6 seconds of Allan Variance data.
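For anyone following along, here is a minimal sketch of the standard non-overlapping Allan Variance estimator at one cluster size (an illustration of the method, not the actual program source):

```cpp
#include <cstddef>
#include <vector>

// Allan variance at averaging time t = m * sample_interval: average the
// data in non-overlapping clusters of m samples, then take half the mean
// squared difference of successive cluster averages.
double allan_variance(const std::vector<double>& y, std::size_t m)
{
    std::size_t clusters = y.size() / m;
    if (clusters < 2)
        return 0.0;  // not enough data at this averaging time

    std::vector<double> avg(clusters, 0.0);
    for (std::size_t i = 0; i < clusters; ++i) {
        for (std::size_t j = 0; j < m; ++j)
            avg[i] += y[i * m + j];
        avg[i] /= static_cast<double>(m);
    }

    double sum = 0.0;
    for (std::size_t i = 0; i + 1 < clusters; ++i) {
        double d = avg[i + 1] - avg[i];
        sum += d * d;
    }
    return sum / (2.0 * (clusters - 1));
}
```

The 6-7x rule of thumb above corresponds to keeping at least 6-7 clusters at the largest averaging time, so the successive-difference sum still has some statistical weight.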
Right shifts are divisions but they remove some noise.

To increase the resolution by 2 bits (without adding noise) you need to take 16 samples and right shift by 2 (otherwise you increase the noise level).
It doesn't change the conclusion. The gyro bias is not shifting randomly over the course of the sample period.
Jonathan,

I assume your data was gathered with the IMU sitting on "the bench". I wonder what differences would be seen during "in flight" conditions. Vibration and temperature changes may change the characteristics such that you do see bias drift.

I don't know that this is the case, but it is worth checking out. There is no sense optimizing code for performance under ideal conditions if the assumptions don't hold in actual use.

Best Regards,
Doug
Agreed. However, to compute the Allan Variance, the gyro must be at rest. The Allan Variance method will tell you what the bias stability will be after all other factors have been accounted for. It represents the best that you can possibly get out of the gyro, and serves to establish an upper bound on its performance.

The bias sensitivity to temperature should be easily measurable on the bench. It should be more or less constant for a given temperature. While the IMU gyros don't have temperature sensors on-board, you should be able to use the CPU's temperature sensor as a reasonable proxy after the system has reached thermal equilibrium.

Anecdotally, the MicroPilot (based on older Analog Devices gyros) would suddenly and severely diverge once you started the engine if the engine and autopilot were coupled to each other. We were running a glow engine at the time. Mounting the autopilot board in a heavy metal case that was mounted to the aircraft frame using soft rubber grommets was required, in addition to a vibration-damping engine mount.
Well, we already know that the bias does drift with temperature, and from my understanding it is worthless to try to use a temperature compensation table or other temperature compensation scheme without having on-board temp sensors. Also, I have observed vibration sensitivity with this IMU, although my observation is more anecdotal than quantitative, and I suspect it relates as much or more to the accelerometers than to the gyros.

However, we have had pretty good success relying on DCM to take care of drift in the gyros.

I am not as versed in statistics and numerical methods as you and Loris appear to be. A little explanation, for the common man, of the mathematical points you are trying to make would be appreciated. It would save me a lot of research time.
Hi Doug..as a common man I found this very readable..lol..:

http://www.xbow.com/support/Support_pdf_files/Bias_Stability_Measur...
Here is the original website with the history and details of Dr. Allan and his theory
You do have an onboard temperature sensor: on the CPU itself. Wearing my mechanical engineering hat: once the system has reached thermal equilibrium (which should only take a few minutes), the difference between the gyro and the CPU temperature should be a constant, regardless of the ambient temperature.

Q_part = UA_part*(T_part - T_ambient),

where Q is thermal power (determined by the current consumption and voltage), UA lumps together the surface area and heat transfer coefficient for the part, and T_part and T_ambient are temperatures. UA is driven by the physical shape and size of the part and the airflow over it, and should be constant (or nearly so) over the course of a flight, as long as you don't have a stream of air flowing directly over the gyro in the cabin. T_ambient is common to all of the parts. Q should be nearly constant for a given part over most of its temperature range; that won't be strictly true, but it should be close enough for these ICs.

Therefore, the differential temperature of each part above ambient is a constant once the part has reached thermal equilibrium. Even though that temperature difference varies from one component to the next, it means the temperature difference between any two parts on the board is also a constant. A lookup table keyed on the CPU's onboard temperature sensor therefore makes a reasonable proxy for calibrating out some of the bias drift sensitivity to temperature.

Now, with the hardware we have available, it probably isn't reasonable to try to calibrate out the rate sensitivity to temperature, only the bias. You would need a high precision turntable to calibrate the rate sensitivity.

Also, the bias sensitivity will vary from one part to the next. I can see DIYDrones charging a small nominal fee for performing that calibration as a premium add-on, since it has to be performed for each part and is time consuming.
Too many assumptions I think....

Ambient temperature in the cabin will not be constant. Mission time may be insufficient for anything to reach equilibrium. Thermal generators inside the cabin such as the ESC will have varying loads with the mission and what is going on within it. Solar loading will change with heading and bank, etc, etc, etc.

If we were designing the system for a single airframe we could work around most of these issues. However, that is not at all our situation. Others who have looked at the temperature compensation problem have said it is not realistic without gyros with on board temp measurement, and I can easily believe that.
Have you tried with the data I made in noisy conditions? I don't have all the software to run your code.

I agree with Doug; it is useless to compensate for the temperature drift (0.03%/°C of sensitivity change, according to the datasheet).

I think the main problem is reducing the noise to get a reliable short-term estimate. As you can see on this graph, the estimate is bad in noisy conditions (the IMU was not moving in this test; only the motor speed varied).


© 2014   Created by Chris Anderson.
