I have been working on a flight controller for my quadcopter for a few months now. I’m an electrical engineer by training and have always been fascinated by things that fly. When I first saw a quadcopter last year, I immediately wanted to learn all about it and make my own. So I read up on PID control and embedded systems programming, got myself an Arduino, and did a few sample projects to gain some experience programming it. I’ll write a detailed blog about my experience once I’m finally done, which I hope will be useful to others who want to design their own quad. For now, long story short: my Arduino-based quad is flying but struggling with stability. The single-axis controls work well, but when I let it fly freely, it sometimes dashes off in one direction instead of hovering in one spot and generally behaves erratically. I’m working on a flight logging system that will allow me to log the output of the sensor fusion system and the PID controller to help diagnose what’s going on.

The purpose of this post is to ask some questions about sensor fusion. The first sensor fusion method I implemented integrates the gyro outputs to get the current orientation and then uses a complementary filter to combine the orientation obtained from gyro integration with the orientation derived from the accelerometer.

The outline of the method is as follows:

  1. Obtain gyro data, subtract offsets. Offsets are calculated at startup by averaging gyro data when the quad is at rest for 1 second.
  2. Adjust for IMU specific scaling factor so the data is in degrees/second
  3. Integrate angular velocities to get angles:

       float gyroAngleX = fgyroX * dt + fLastGyroAngleX;
       float gyroAngleY = fgyroY * dt + fLastGyroAngleY;
       float gyroAngleZ = fgyroZ * dt + fLastGyroAngleZ;

  4. Obtain angles from the accelerometer and combine them with the angles obtained from gyro integration using a complementary filter:

       // Angle of rotation about the Y axis computed from the acceleration vector
       float accelAngleY = atan(-faccX / sqrt(faccZ*faccZ + faccY*faccY)) * RADIANS_TO_DEGREES;

       // Angle of rotation about the X axis computed from the acceleration vector
       float accelAngleX = atan(faccY / faccZ) * RADIANS_TO_DEGREES;

       const float alpha = 0.98;
       float angleX = alpha * gyroAngleX + (1.0 - alpha) * accelAngleX;
       float angleY = alpha * gyroAngleY + (1.0 - alpha) * accelAngleY;
       float angleZ = gyroAngleZ;  // Accelerometer doesn't give a z-angle
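
For reference, here is the whole update collected into one function, roughly as it sits in my loop. This is just a consolidated sketch: the variable names are illustrative, the gyro values are assumed to already be offset-corrected and scaled to degrees/second per steps 1-2, and I'm assuming the fused angles are fed back as the fLastGyroAngle* values for the next iteration, which is what lets the accelerometer correction bound the gyro drift.

       // Consolidated sketch of steps 1-4 (illustrative names; fgyro* are assumed
       // offset-corrected and scaled to deg/s, facc* are accelerometer readings).
       const float RADIANS_TO_DEGREES = 180.0 / 3.14159265;
       const float alpha = 0.98;
       float fLastGyroAngleX = 0, fLastGyroAngleY = 0, fLastGyroAngleZ = 0;

       void updateOrientation(float fgyroX, float fgyroY, float fgyroZ,
                              float faccX, float faccY, float faccZ, float dt)
       {
         // Step 3: propagate the previous angle estimates with the gyro rates
         float gyroAngleX = fgyroX * dt + fLastGyroAngleX;
         float gyroAngleY = fgyroY * dt + fLastGyroAngleY;
         float gyroAngleZ = fgyroZ * dt + fLastGyroAngleZ;

         // Step 4: angles from the measured gravity vector
         float accelAngleY = atan(-faccX / sqrt(faccZ*faccZ + faccY*faccY)) * RADIANS_TO_DEGREES;
         float accelAngleX = atan(faccY / faccZ) * RADIANS_TO_DEGREES;

         // Complementary filter: trust the gyro short term, the accelerometer long term
         float angleX = alpha * gyroAngleX + (1.0 - alpha) * accelAngleX;
         float angleY = alpha * gyroAngleY + (1.0 - alpha) * accelAngleY;
         float angleZ = gyroAngleZ;  // accelerometer provides no yaw information

         // Assumption: feed the fused angles back so the accelerometer correction
         // keeps the gyro integration from drifting on the next iteration
         fLastGyroAngleX = angleX;
         fLastGyroAngleY = angleY;
         fLastGyroAngleZ = angleZ;
       }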

 

This method gives good results, with the gyro providing consistent, noise-free angular velocities and the accelerometer helping to overcome the drift that the gyro naturally suffers from. However, as noted in numerous articles online, using the accelerometer data has a couple of pitfalls. The data is noisy and (as the name implies) reports real acceleration, not just the acceleration due to gravity that is required to determine the instantaneous orientation of the quadcopter body frame. So I was wondering if my sensor fusion method was causing the instability that I was seeing during my flight tests.

I poked around the Ardupilot code to see what method they are using for sensor fusion. As an aside, I’ve learnt so much from browsing through the Ardupilot flight controller code. It can be a bit hard to read, but it has provided me some valuable insights into flight control systems. Thanks a million to all the amazing people in the Ardupilot community who spent time and effort creating this incredibly useful system and making it available to everyone.

I discovered that Ardupilot uses the DCM algorithm, which is described in an excellent article titled “Direction Cosine Matrix IMU: Theory” on https://code.google.com/p/gentlenav/. So I went ahead and implemented the algorithm to see how the output compares with that of the complementary filter based sensor fusion. I’m not using a GPS, so I’m not applying a yaw correction; I’m only using the accelerometer data to compute the roll-pitch correction. Since I don’t have any data about the velocity vector, I’m also not applying the centrifugal acceleration estimate outlined in Equation 25. So my DCM algorithm involves:

  1. Calculate roll_pitch_correction_plane = [rzx, rzy, rzz] cross [accx, accy, accz]
  2. TotalCorrection = roll_pitch_correction_plane
  3. Omega_p = Kp * TotalCorrection
  4. Omega_i += Ki * dt * TotalCorrection
  5. Omega_correction = Omega_p + Omega_i
  6. Use Omega_correction to update the rotation matrix and renormalize as explained in Equation 17
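
To make the correction concrete, here is a rough sketch of steps 1-5 in code. This is my own illustration, not code lifted from Ardupilot; the matrix layout, the helper names and the way the gains are passed in are all assumptions.

    // Sketch of the PI roll-pitch drift correction (steps 1-5).
    // R is the 3x3 rotation matrix, acc[] the normalized accelerometer vector.
    // Kp, Ki, omegaI[] and the other names are illustrative only.
    void computeDriftCorrection(float R[3][3], const float acc[3], float dt,
                                float Kp, float Ki,
                                float omegaI[3], float omegaCorrection[3])
    {
      // Step 1: [rzx rzy rzz] cross [accx accy accz] -- how far the estimated
      // vertical direction is tilted away from the measured gravity vector
      float corr[3];
      corr[0] = R[2][1] * acc[2] - R[2][2] * acc[1];
      corr[1] = R[2][2] * acc[0] - R[2][0] * acc[2];
      corr[2] = R[2][0] * acc[1] - R[2][1] * acc[0];

      for (int i = 0; i < 3; i++) {
        float omegaP = Kp * corr[i];               // step 3: proportional term
        omegaI[i]   += Ki * dt * corr[i];          // step 4: integral term (cancels gyro bias)
        omegaCorrection[i] = omegaP + omegaI[i];   // step 5: total correction
      }
      // Step 6 (not shown here): add omegaCorrection to the gyro rates, update
      // the rotation matrix and renormalize as in Equation 17.
    }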

Kp and Ki are chosen to be the minimum necessary to quickly cancel out gyro drift. The results are shown in the screen captures below. They are from a Qt application that I built, which allows me to plot data received from the quadcopter and send it commands (start the motors, increase speed, change the PID control parameters, etc.). The application has been invaluable in helping me understand what’s going on and debug.

Some info about the hardware I’m using: the flight control system is running on an Arduino Mega. The sensors (MPU6050) are polled at 100Hz, and sensor fusion is also running at 100Hz. The results of the sensor fusion are sent to the Qt application over XBee at 10Hz. The MPU6050 has an onboard DMP that can do sensor fusion in hardware, but I’m not using that.

In the figures below, the red line shows the output of the DCM algorithm (Method 2) and the gray line the output of the complementary filter based sensor fusion (Method 1). The green line is the output of the PID controller, which wasn’t active during my experiments, so please ignore it for now. Each panel shows the data collected over about 10 seconds. Since data is being sent at 10Hz, there are about 100 sample points on each graph.

[Screen capture 1]

This chart shows the output of the two sensor fusion algorithms with no acceleration correction. Method 1 (where each gyro axis is being integrated independently) exhibits a much higher drift.             

In the charts below, acceleration correction is being applied. I tried to keep the magnitude of the acceleration correction as low as possible while being high enough to cancel out gyro drift.

 

[Screen capture 2]

Here I’m shaking the quad vigorously. Significant differences are observed between the outputs of the two methods. As I was shaking the quad, I never took it across the neutral roll position, yet the output of Method 1 crosses the neutral line on the roll axis, which suggests its results are incorrect.

I’d appreciate it if members of the community could provide some insight into this. Is the DCM approach much more effective? It is certainly much more computationally intensive than the complementary filter based sensor fusion, so I’m assuming there must be a compelling reason to use it on resource-constrained hardware.

The other obvious difference between the two methods is that in the complementary filter approach, each gyro axis is integrated completely independently of the others, while in the DCM approach the gyro vector is used to calculate a rotation update that is then multiplied with the current rotation matrix, so it mixes the rotations about the various axes in a mathematically correct way (illustrated in the sketch below). Any other insights here?
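
To illustrate what I mean, the small-angle DCM update looks roughly like this (my own sketch, not Ardupilot’s code; the names are illustrative):

    // Apply one small-angle DCM update: R_new = R * (I + skew(omega * dt)).
    // R is the current 3x3 attitude matrix, omega[] the corrected gyro rates in rad/s.
    void updateRotationMatrix(float R[3][3], const float omega[3], float dt, float Rnew[3][3])
    {
      float dTheta[3] = { omega[0] * dt, omega[1] * dt, omega[2] * dt };

      float update[3][3] = {
        {  1.0f,      -dTheta[2],  dTheta[1] },
        {  dTheta[2],  1.0f,      -dTheta[0] },
        { -dTheta[1],  dTheta[0],  1.0f      }
      };

      // Every axis contributes to every element of the product, which is why the
      // DCM mixes rotations correctly while independent per-axis integration does not.
      for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
          Rnew[i][j] = R[i][0]*update[0][j] + R[i][1]*update[1][j] + R[i][2]*update[2][j];

      // Rnew then replaces R after the renormalization step (Equation 17 in the paper).
    }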


Replies

  •   Direct gyro integration is problematic unless you only intend to hover in place.  For example, if you start out level, pointing away from yourself, then pitch up 30 deg followed by a yaw right of 90 deg, your actual final orientation will be pointing to the right with level pitch and 30 deg of roll.  Direct integration of the gyro will tell you that you are pointing right with 30 deg of pitch up, which is wrong.  To counter this you will have to give the accelerometer very strong control over the estimated pitch and roll to quickly counter these types of errors, which will let a large amount of accelerometer noise affect your attitude estimate.  The fact that you can fly at all is amazing.

    • Hi Phillip, thanks. That certainly makes sense, though in my direct integration implementation, even a very low weight for the pitch/roll computed from the acceleration input in the complementary filter appears to be sufficient to keep the results consistent. If the sensor fusion is done frequently enough, even a low weight for the acceleration part is sufficient to keep the gyro errors from accumulating. 

      I do observe significant differences between the two approaches when I shake the quad vigorously as noted in my post. 

      I'm wondering - has anyone succeeded in achieving a stable hover (at least 30 seconds) using direct integration + a complementary filter? 

      • My current complementary filter runs at 400Hz on an Arduino Uno.

        For reference, here's an outline of what I do:

        1. Get gyro and accel data from the MPU6050
        2. Compensate for bias and convert the gyro data to radians/s and accel data to G's
        3. Create a unit quaternion from the gyro data that represents the very small change in attitude since my last update. LINK
        4. Multiply this new incremental attitude change quaternion by the previous attitude estimate quaternion (This "adds" the two together to give a new current attitude estimate quaternion.  Watch the multiplication order, it matters.)
        5. Use this newest quaternion to rotate the gravity vector (0,0,1) from world frame into the body frame.
        6. Take the cross product of the rotated gravity vector and accelerometer vector, to create my error estimate vector
        7. Divide this error vector by the loop rate (400).
        8. Add this scaled error correction vector to the next gyro reading.
        9. repeat

        Notes:

        • At the speed this can run, all trig functions can use the small angle approximation
        • Nowhere in this process do I need a trig function or square root, and there is only one floating-point divide.  Otherwise it's only multiplies, additions and subtractions.
        • This is the simplest form of this complementary filter, I do a lot more in my full implementation.
        • Using radians for all angles is critical 
        • This same thing can be done with a 3x3 rotation matrix (DCM) instead of quaternions.  I prefer quaternions as they never have orthogonality issues and almost never need to be re-normalized (maybe once or twice per hour of operation)

        Phillip

        Incremental Quaternion from 3D Rotational Vector Derivation.pdf
        • Thanks Phillip, this is very helpful. My current orientation update using DCM is working quite well now, but I'll implement the quaternion based approach as well to do a quick comparison. 
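
          Based on your outline, here is roughly what I'm planning to implement (my own sketch with placeholder names; I still need to check the cross product order/sign against my axis conventions):

            // Sketch of steps 3-8 from the outline above. gx/gy/gz are gyro rates
            // in rad/s, ax/ay/az the accelerometer vector in G's (~unit length at
            // rest), dt = 1 / loop rate. All names are placeholders.
            struct Quat { float w, x, y, z; };
            Quat q = {1, 0, 0, 0};                    // current attitude estimate
            float corrX = 0, corrY = 0, corrZ = 0;    // error carried to the next gyro sample

            void updateAttitude(float gx, float gy, float gz,
                                float ax, float ay, float az, float dt)
            {
              // Step 8 (from the previous pass): add the scaled error to the gyro reading
              gx += corrX;  gy += corrY;  gz += corrZ;

              // Step 3: incremental rotation quaternion from the gyro (small-angle approx)
              Quat dq = { 1.0f, 0.5f * gx * dt, 0.5f * gy * dt, 0.5f * gz * dt };

              // Step 4: q = q * dq (multiplication order matters)
              Quat qn;
              qn.w = q.w*dq.w - q.x*dq.x - q.y*dq.y - q.z*dq.z;
              qn.x = q.w*dq.x + q.x*dq.w + q.y*dq.z - q.z*dq.y;
              qn.y = q.w*dq.y - q.x*dq.z + q.y*dq.w + q.z*dq.x;
              qn.z = q.w*dq.z + q.x*dq.y - q.y*dq.x + q.z*dq.w;
              q = qn;

              // Step 5: world gravity (0, 0, 1) expressed in the body frame
              float vx = 2.0f * (q.x*q.z - q.w*q.y);
              float vy = 2.0f * (q.y*q.z + q.w*q.x);
              float vz = 1.0f - 2.0f * (q.x*q.x + q.y*q.y);

              // Step 6: error vector from the cross product of the measured and
              // estimated gravity directions (order/sign to be verified)
              float ex = ay*vz - az*vy;
              float ey = az*vx - ax*vz;
              float ez = ax*vy - ay*vx;

              // Step 7: divide the error by the loop rate (multiply by dt) and keep
              // it for the next gyro sample
              corrX = ex * dt;  corrY = ey * dt;  corrZ = ez * dt;
            }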

          Another thing I recently realized is how sensitive the orientation estimates are to vibrations. Adding some vibration damping to the IMU mount has helped a lot. Also, since the accelerometer on the MPU6050 can be polled at 1000Hz while the orientation computation only has to run at about 200Hz, I'm using the average of the last 4 accelerometer readings in the DCM computation. That also helps reduce noise. 
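
          The averaging itself is just a small circular buffer, something along these lines (an illustrative sketch, one axis shown; the buffer size and names are just what I happen to use):

            // Running average of the last 4 accelerometer samples (X axis shown)
            #define ACC_AVG_SAMPLES 4
            float accBufX[ACC_AVG_SAMPLES] = {0};
            int accBufIdx = 0;

            float averageAccX(float newSample)
            {
              accBufX[accBufIdx] = newSample;
              accBufIdx = (accBufIdx + 1) % ACC_AVG_SAMPLES;
              float sum = 0;
              for (int i = 0; i < ACC_AVG_SAMPLES; i++)
                sum += accBufX[i];
              return sum / ACC_AVG_SAMPLES;
            }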

          Thanks very much for taking the time to respond and for the insights that you have provided.

      • I can't say if anyone has done that; probably someone has.  

        Considering how easy it is to do gyro-based attitude integration with either a rotation matrix or a quaternion (thus solving the previous error-prone problem), it seems like an odd goal.  Written correctly, neither requires trig functions or square roots, so they can run very fast.
