Hi all,

  I have recently started writing a series of blog posts that detail how I have chosen to program my own IMU complementary filter.  I have done this for my own entertainment and as a learning opportunity.  Hopefully others will find this interesting.

 Here are the basics of what this complementary filter has to offer:

  1. Quaternion-based attitude estimate
  2. Never requires re-orthogonalization, unlike DCM or any other 3x3 rotation matrix solution
  3. Almost never requires re-normalization of the attitude quaternion
  4. Requires no trigonometric functions or floating point division, so it runs fast.  It can process 6dof samples at 500Hz on an Arduino if non-IMU functions are light, and it functions nicely at lower sample rates (~100Hz) if processing time is needed for other operations.  (A rough sketch of this kind of update follows this list.)
  5. No gimbal lock in any orientation
  6. Fully 3D capable after initialization.  It will properly counter attitude estimate drift and error regardless of the amount of time spent in any orientation.
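
  To give a flavor of item 4, the gyro integration step looks roughly like the following.  This is a simplified sketch in Python for readability, not the exact code from the posts:

  def gyro_update(q, gx, gy, gz, dt):
    # First-order (small-angle) quaternion integration of body gyro rates:
    #   q_new = q + 0.5 * dt * q * (0, gx, gy, gz)
    # Only multiplies and adds; no trig and no division.  An occasional
    # re-normalization keeps q a unit quaternion (see item 3 above).
    w, x, y, z = q
    hx, hy, hz = 0.5 * gx * dt, 0.5 * gy * dt, 0.5 * gz * dt
    return (w - x*hx - y*hy - z*hz,
            x + w*hx + y*hz - z*hy,
            y + w*hy - x*hz + z*hx,
            z + w*hz + x*hy - y*hx)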

The current posts cover the following:

Future posts will build on the previous posts while examining the following:

  • Magnetometer usage and implementation
  • Example code for my 9dof complementary filter
  • GPS and Baro implementation with simple velocity and position estimation
  • Example code for my 10dof+GPS complementary filter
  • Aircraft control based on quaternion data
  • Anything else that strikes my interest... or is requested ;-)

  My goals have been to write a filter that runs fast, behaves nicely in any orientation, and can be used in my own multi-copter control software.  So far I am happy.  This is not intended to be the "best" complementary filter in any way, just me having fun writing code.  Nothing I have or will present is groundbreaking, although I haven't seen any implementations quite like mine.  I hope this is valuable to someone.

Regards,

Phillip

Replies

    • I had not seen that.  With a two-second glance, that looks very similar.

      I will have to take a closer look when I have a minute.  Thanks for posting.

  • Hey Phillip, I've been toying with your code for an ACC+GYRO IMU and I'm getting weird results if I inject drift onto the yaw axis... 

    I'm just playing in simulation, so what I've done is made a static MPU:

    (I'm working in python)

    from math import radians

    class mpu:
      def __init__(self):
        # Static accelerometer reading for a constant 15 degree tilt about the pitch axis
        self.aX = -0.25881904510252076234889883762405  # fwd  (-sin(15 deg))
        self.aY = 0.0                                   # left
        self.aZ = 0.9659258262890682867497431997289    # up   (cos(15 deg))
        self.gX = radians(0.0)  # roll
        self.gY = radians(0.0)  # pitch
        self.gZ = radians(0.1)  # yaw  <- Z drifting at 0.1 deg/s
        self.samplePeriod = 0.002

      def retrieve(self):
        pass

    So if I loop through, it moves the estimate towards the 15 degree tilt as expected, but as the yaw accumulates it sloshes between the pitch and roll axes instead of staying on the pitch.  This could be a problem with my conversion of the RotationQuat to roll/pitch/yaw values... I notice that function is missing from your repository:

    https://github.com/daPhoosa/Math3D/blob/master/Math3D.h#L425
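
    For reference, a standard ZYX (yaw-pitch-roll) extraction looks roughly like this.  This is just my sketch, not the missing Math3D function, and the axis order or sign convention may need to match the filter:

    from math import atan2, asin, degrees

    def quat_to_rpy(w, x, y, z):
      # Standard ZYX Euler extraction from a unit quaternion (w, x, y, z).
      # Axis ordering and signs must match the filter's conventions.
      roll  = atan2(2.0*(w*x + y*z), 1.0 - 2.0*(x*x + y*y))
      pitch = asin(max(-1.0, min(1.0, 2.0*(w*y - z*x))))
      yaw   = atan2(2.0*(w*z + x*y), 1.0 - 2.0*(y*y + z*z))
      return degrees(roll), degrees(pitch), degrees(yaw)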

    The attached file is my python port of your library with some test code at the bottom.  

    test2.py

    • Ah yes - I think it was my rpy conversion (axis ordering). 

      Here's a test running with all axes drifting, and the resulting Roll/Pitch is stable. 

      test2.py

      • And one more tweak... 

        Add a GyroDrift variable to zero out the Gyro over time:

        GyroDrift = Vector(0.0, 0.0, 0.0)

        ...

        GyroVec  = Sum(GyroDrift, Vector(MPU.gX, MPU.gY, MPU.gZ))

        ...

        GyroVec = Sum(GyroVec, correction_Body)
        GyroDrift = Sum(Mul(GyroDrift, 0.999), Mul(Sum(GyroDrift, correction_Body), 0.001))

        test2.py

        • Very cool.

            I forgot to commit the latest version of my Math3D header.  Thanks for catching that.  I only added the YPR function for this example and don't actually use it for any of my projects.

            I do have a form of my complementary filter that incorporates gyro drift correction, but I did not include it in this example to avoid making it more complicated than needed.  I would recommend implementing it slightly differently.  You appear to be accumulating the error in the world frame, but if the actual drift is caused by a bias in the sensor itself, then the true gyro drift is actually related to the body frame.  Your solution will result in good performance if the body stays on a straight path or varies slowly.  If the body reverses direction quickly (a 180 degree turn), then the current accumulation of error will be applied incorrectly, resulting in a short-term disturbance while the error accumulates in the opposite direction until it reaches steady state again.

          Example:

          ...

          // cross product to determine error
          correction_World = CrossProd(Accel_World, VERTICAL);

          // rotate correction vector to body frame
          Vec3 correction_Body = Rotate(correction_World, RotationQuat);

          // accumulate gyro drift in body frame
          GyroDrift = Sum(GyroDrift, correction_Body);

          // add a small portion of the GyroDrift accumulation to the correction vector
          correction_Body = Sum(correction_Body, Mul(GyroDrift, 0.01));

          // add correction vector to gyro data
          GyroVec = Sum(GyroVec, correction_Body);

          ...

          Effectively this is a PI controller with a P_gain of 1/sample_rate and an I_gain of 0.01/sample_rate.  When programmed like this, initial gyro bias calibration is not really needed.
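
          In case it helps with the Python port, the correction above boils down to something like this (a minimal sketch using plain tuples rather than the Vec3 type; the names and the gain argument are just for illustration):

          def correct_gyro(gyro, err, drift_accum, i_gain=0.01):
            # gyro: raw body rates, err: cross-product error vector in the body frame,
            # drift_accum: running sum of err.  The result feeds the quaternion
            # integration, which multiplies by dt, hence the 1/sample_rate framing
            # of the effective P and I gains.
            drift_accum = tuple(d + e for d, e in zip(drift_accum, err))    # I term: accumulate error
            corrected = tuple(g + e + i_gain * d
                              for g, e, d in zip(gyro, err, drift_accum))   # P term + scaled I term
            return corrected, drift_accum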
          Thanks for sharing your experience.
          • I think we're doing roughly the same thing... I don't see where I'm accumulating in the world frame.  I just wanted to keep the GyroDrift variable noise-free, so it builds with its own time constant towards correction_Body. 

            •   You're right, I misunderstood the code in your comment; you indeed are not accumulating the error in the world frame.

                I believe you are effectively applying a very strong LPF to the correction vector, and that becomes your drift estimate.  I'm partial to my solution as it requires less processor time, but that is probably not a significant consideration for anything that runs Python.

               If noise becomes an issue with my I term, I can decrease the gain.  

              • Had some thoughts:

                What if, instead of rotating the acc into the earth frame, you rotated VERTICAL into the body frame? This lets you compensate for yaw drift when tilted:

                if abs(gravity_b_v.x) < cos(radians(15)):
                    gyro_drift_b_v.x = (gyro_drift_b_v.x * (1 - drift_factor)
                                        + (gyro_drift_b_v.x + c_rate_b_v.x) * drift_factor)

                if abs(gravity_b_v.y) < cos(radians(15)):
                    gyro_drift_b_v.y = (gyro_drift_b_v.y * (1 - drift_factor)
                                        + (gyro_drift_b_v.y + c_rate_b_v.y) * drift_factor)

                if abs(gravity_b_v.z) < cos(radians(15)):
                    gyro_drift_b_v.z = (gyro_drift_b_v.z * (1 - drift_factor)
                                        + (gyro_drift_b_v.z + c_rate_b_v.z) * drift_factor)

                Also, shouldn't you update the rotation estimate -before- you use it to check the vertical vector?  i.e., move steps 5 & 6 between steps 1 & 2 to keep the estimate in sync with the current MPU samples?

                • Thanks for the video.  I will definitely be installing Python and playing with it.

                  As a semi-unrelated subject, I saw your discussion a year ago with Rob about the Stability control loop improvement.  It has inspired me to play with making a constant-jerk implementation.  I have an initial Excel spreadsheet test done and I hope to code it soon.  Limiting accel and velocity magnitudes made my head hurt, so at the moment it's pure constant jerk.  I hope to add those other constraints in the future.

                  •   I had not seen that.  It is interesting, but I think it is of little value for what I'm doing.  Given the relatively few vectors that need to be rotated for an IMU, the cost to use ternions is significantly higher than using quaternions.

                     With the equation I use for rotating a vector by a quat, I use 15 mul and 15 sum.  His equation for rotating by a ternion requires creating a 3x3 matrix using 12 mul, 2 div and 13 sum.  This is followed by the actual rotation, which requires an additional 12 mul and 6 sum.  If a person were rotating many vectors each time the M3x3 was created, the cost per vector would be lower.
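
                     For reference, one formulation that hits that count looks roughly like the following (a Python sketch for illustration only; my actual code may differ in the details, and the doubling is done with additions to stay at 15 mul and 15 sum):

                     def rotate_vec(q, v):
                       # Rotate vector v by unit quaternion q = (w, x, y, z).
                       # v' = v + 2 * u x (u x v + w*v), where u = (x, y, z).
                       w, x, y, z = q
                       vx, vy, vz = v
                       # t = u x v + w*v      -> 9 mul, 6 add
                       tx = y*vz - z*vy + w*vx
                       ty = z*vx - x*vz + w*vy
                       tz = x*vy - y*vx + w*vz
                       # c = u x t            -> 6 mul, 3 add
                       cx = y*tz - z*ty
                       cy = z*tx - x*tz
                       cz = x*ty - y*tx
                       # v' = v + c + c       -> 6 add (doubling by addition)
                       return (vx + cx + cx, vy + cy + cy, vz + cz + cz)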

                     In his comparison, he assumes vector rotation using the quaternion long hand (v′ = q v q⁻¹), which takes ~55% longer than the method I use.  If he were comparing the ternion to this faster quaternion method, the advantage would be significantly less clear than is shown.

                      Chaining rotations does appear to be roughly a tie.  The ternion multiplication shown requires 12 mul, 1 div, 12 sum and 1 branching evaluation.  Quaternion multiplication requires 16 mul and 12 sum.  On small microcontrollers, divisions are very expensive, so I think the ternion might be slightly faster, but it's hard to say by how much.  Given that I do this once per control loop, it is not worth the losses incurred by rotating vectors with ternions.
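
                      For completeness, the quaternion multiplication I'm counting is the standard Hamilton product, roughly:

                      def quat_mul(a, b):
                        # Hamilton product a*b of two quaternions (w, x, y, z): 16 mul, 12 add.
                        aw, ax, ay, az = a
                        bw, bx, by, bz = b
                        return (aw*bw - ax*bx - ay*by - az*bz,
                                aw*bx + ax*bw + ay*bz - az*by,
                                aw*by - ax*bz + ay*bw + az*bx,
                                aw*bz + ax*by - ay*bx + az*bw)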

                      Geometric error is of no concern for an IMU complementary filter, since orientation correction is happening continuously and sensor noise and drift dwarf this source of error.  Re-normalization is so rarely needed in my usage as to be negligible, except in cases where prolonged operation is expected.  Even then, a normalization operation every few minutes is not a significant burden.
