Hi all,

  I have recently started writing a series of blog posts detailing how I chose to program my own IMU complementary filter.  I did this for my own entertainment and as a learning opportunity.  Hopefully others will find it interesting.

 Here are the basics of what this complementary filter has to offer:

  1. Quaternion-based attitude estimate
  2. Never requires re-orthogonalization, unlike DCM or any other 3x3 rotation matrix solution
  3. Almost never requires re-normalization of the attitude estimate quaternion
  4. Requires no trigonometric functions or floating point division, so it runs fast.  Can process 6dof samples at 500Hz on an Arduino if non-IMU functions are light.  Functions nicely at lower sample rates (~100Hz) if processing time is needed for other operations.
  5. No gimbal lock in any orientation
  6. Fully 3D capable after initialization.  Will properly counter attitude estimate drift and error regardless of the amount of time spent in any orientation.

The current posts cover the following:

Future posts will build on the previous posts while examining the following:

  • Magnetometer usage and implementation
  • Example code for my 9dof complementary filter
  • GPS and baro implementation with simple velocity and position estimation
  • Example code for my 10dof+GPS complementary filter
  • Aircraft control based on quaternion data
  • Anything else that strikes my interest... or is requested ;-)

  My goals have been to write a filter that runs fast, behaves nicely in any orientation, and can be used in my own multi-copter control software.  So far I am happy.  This is not intended to be the "best" complementary filter in any way, just me having fun writing code.  Nothing I have presented or will present is groundbreaking, although I haven't seen any implementations quite like mine.  I hope this is valuable to someone.





  • For those interested in seeing how I use a magnetometer to counter yaw drift in my complementary filter, I have posted another blog post about it:

    Quaternion IMU Drift Compensation: Magnetometer

    This method of yaw drift compensation allows for corrections regardless of body orientation.  Some other lightweight complementary filters only provide correction when near level.

    I will be releasing a library in the near future that should make it simple for anyone to add this to any project.

  • Yeah, I didn't see an obvious advantage either, but I also didn't even attempt to simplify his matrix rotation - I thought it was interesting mostly because I couldn't find any other references to it.

    You don't find you need to re-normalize often? My simulation started to show degraded quats fairly quickly (within a small number of seconds). I'm implementing on an ARM-based FC, so I just have it do an approximate normalization every 100 loops, which I think is a pretty standard optimization for normalizing unit quats. It basically never hits the final else. I'm pretty sure the error accumulation is driven by the small-angle approximation.

    void FastNormQ(const q4_t *a, q4_t *o)
    {
      float m2 = Mag2Q(a); // magnitude squared
      float e = fabsf(1.0f - m2);

      if (e < 0.00001f) // close enough already
      {
        *o = *a;
      }
      else if (e < 0.005f) // works best when the quat is already almost normalized
      {
        float f = (3.0f - m2) / 2.0f; // first-order approximation of 1/sqrt(m2)
        o->w = a->w*f;
        o->x = a->x*f;
        o->y = a->y*f;
        o->z = a->z*f;
      }
      else // fall back to a full normalization
      {
        MulQF(a, 1.0f / sqrtf(m2), o);
      }
    }

    • Next I will write a test that rotates a vector away and then back again.  I should be able to compare the rotated vector to the original and see how it is affected by the distorted quaternion.

      I've gotta run right now, but maybe later tonight.

      • Minor change of heart...

        After thinking about it more, I'll probably normalize the quaternion once per second.  The cost is low using my optimized inverse square root function, and I have extra cycles to burn.  Might as well minimize error if my processor is standing around waiting for things to do.

        • I agree. I have it check once a second and it's usually fine; when it's not, it's close enough that the normalization approximation I posted above works fine, and it never does the square root. Is your 1/sqrt() faster than taking path 2 in the FastNorm function above? Maybe I should use that...

          • Your path 2 is faster (116 µs vs. 144 µs for the complete normalization operation) and more accurate for anything very near 1.0.

            I used to have an InvSqrtFast function that was identical to your path 2 in my 3D math header.  I will add it back in.  Not really sure why I removed it...

            The InvSqrt function in my header is fast compared to a "real" inverse square root if you can't assume the vector is near 1.0, clearly this should never be the case for an orientation quaternion.

      • Ok, I ran a test where I used the degraded quat to rotate a vector out and back.  Every 1000 cycles I checked the magnitude of the rotated vector and the error in direction.  I have attached a graph of the data out to 300k cycles (also the Arduino program and Excel file).  The same unit vector was used for each test, so the error shown is not cumulative.


        1.) The error shown includes two rotations (out and back).  In cases where only one rotation is performed, the error would be less (maybe/probably half?).

        2.) The larger the magnitude error of the quat, the larger the potential error in the rotated vector.

        3.) The max potential vector magnitude error is 8x the Quat magnitude error.  When viewing the data, it is obvious that not many samples reached this max.

        4.) The max potential vector direction error is 4x the Quat magnitude error.  Again, not many samples reached this upper limit.

        5.) Even after 300k cycles (12.5 minutes for my aircraft), the max vector directional error is under 0.5% and the max magnitude error is under 1%.  This is certainly significant, but not catastrophic.

        6.) The rate of quaternion magnitude change is loosely related to magnitude of the rotation vector used in integration.  Smaller rotation rates generally reduce the rate at which the magnitude of the quaternion deviates.  Given this, the optimal normalization period may vary based on the expected body movement.

        Conclusion:  I will probably normalize my quaternions every minute or two for most of my applications, but I don't see a catastrophic cliff that would require normalization every second.

        Let me know what you think.


        Quat and Vec error graph 01.JPG


        Degraded Quaternion Test.xlsx

    • We may have different perceptions of what is an acceptable level of magnitude deviation.

      I just re-did my magnitude deviation test.

      1.) Feed random numbers into my quaternion-forming function that are consistent with the magnitude of typical gyro data.

      2.) Multiply this new quat by the old quat and store back in the old quat.  

      3.) Loop and periodically check magnitude.  

      The magnitude does change a little faster than I remembered from the last time I did this test.  Even so, it wasn't to a degree that would require more than occasional attention.  After ~260,000 loops (10 min in my aircraft), the magnitude deviation is at 0.1%.  One tenth of a percent is still very usable, but probably warrants normalization.  The beauty of a quaternion is that you don't have to deal with orthogonality issues (as with a rotation matrix); even a slightly non-normal quat will provide acceptable vector rotation.  Given the level of noise in both the mag and accel, the quat does not need to be perfect.  If a person were going to the moon, I'd be using a different processor and everything would be doubles.

      For reference, here's the Arduino program I used for this test:

      #include <Math3D.h>

      Quat Q;
      Vec3 R;
      unsigned long s = 0;

      void setup() {
        Serial.begin(115200); // baud rate assumed; the original omitted the setup body
      }

      void loop() {
        for (int i = 0; i < 1000; i++) {
          // random rates in the +/-10 range, consistent with typical gyro data
          R.x = float(random(-10000, 10000)) / 1000.0f;
          R.y = float(random(-10000, 10000)) / 1000.0f;
          R.z = float(random(-10000, 10000)) / 1000.0f;

          // form a small rotation quat (2500 us step) and accumulate it
          Q = Mul(Q, Quaternion(R, 2500UL));
        }

        s += 1000;

        Serial.print(s, DEC);
        Serial.print(" ");
        Serial.println(Magnitude(Q), 6);
      }

  • Have you seen the Circuit Cellar article "The Freespace IMU"?

    A basic quat-based IMU; the code is available here:


    • Ok, I took a minute to read through this code and indeed it does use many of the same ideas.

      The biggest functional difference I see is that I "stream" my orientation corrections in with my gyro data, so that error correction happens at the same time as gyro integration.  In the Freespace code, error corrections are independent of the gyro integration, so it then performs unique operations to correct for error.  This looks computationally expensive to me, but they are only performed when it detects error above a certain threshold, so it may work well for situations where the gyros/accel have little noise or bias error.

      Some of the equivalent functions used in the Freespace IMU are significantly more time consuming.  Its function for rotating a vector by a quaternion takes 1.6x more time than the one I use.  The method used for gyro integration takes roughly 2x more time than my method.  This is probably not a big deal in most situations, but I enjoy making mine run fast, so I care.

      Thanks again for posting this.


