I started this project wanting to learn more about the ArduCopter software that was powering my APM board and 3DR Quad-C frame. I found Dr. Owenson's tutorial extremely useful and the perfect starting point. However, I wanted to understand the theory behind the control system in order to gain a more fundamental grasp of its equations and parameters. I wasn't able to find a complete and detailed explanation of an autopilot derivation based on the ArduCopter libraries, nor a complete system identification process done for a 3DR frame. Therefore, I decided to document my own project, including all the equations, electrical schematics, and source code.

I developed the flight controller starting with the equation derivations, moving through simulation, and finally implementing it on hardware using the latest ArduCopter libraries. I broke the project into several modules. Each is summarized briefly below, with links to more detail for anyone interested.

  • System modeling section defines the global and body frame coordinate systems along with the non-linear equations of motion for linear and rotational accelerations.
  • Model verification section describes the process of identifying the equation of motion parameters that are unique to the specific hardware used. This includes calculating the moments of inertia, IMU sensor noise, and brushless motor modeling. Additionally, motion capture data was used to verify the parameters with accurate measurement data.
  • Control system design section describes the theory behind PID controllers, the nested translation, attitude, and angular rate controllers used, and some practical modifications on the traditional PID to improve performance.
  • Simulation environment section describes the implementation of the control system and non-linear dynamics in software. Specifically, it details the use of MATLAB to visualize the simulations. The simulation environment can also be used to verify PID tuning.
  • Autopilot implementation section describes the implementation of the control system on an APM2.5 board using the ArduCopter 3.2 libraries.
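As a rough sketch of the nested PID idea from the control system design section, here is a minimal single-axis angle-to-rate cascade in Python. The class, the gains, and the integrator clamp are illustrative assumptions on my part, not the actual ArduCopter implementation or its default parameters.

```python
class PID:
    """Minimal discrete PID with integrator clamping (illustrative only)."""
    def __init__(self, kp, ki, kd, i_limit=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.i_limit = i_limit
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        # Clamp the integrator to limit windup, one of the practical
        # modifications to the traditional PID mentioned above.
        self.integral = max(-self.i_limit, min(self.i_limit, self.integral))
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Nested loops: attitude error -> desired rate -> moment command.
# Gains below are placeholders, not tuned values.
attitude_pid = PID(kp=4.5, ki=0.0, kd=0.0)   # outer (angle) loop
rate_pid = PID(kp=0.15, ki=0.1, kd=0.004)    # inner (rate) loop

def roll_controller(roll_target, roll_measured, rate_measured, dt):
    """Outer loop turns angle error into a rate target; inner loop
    turns rate error into a roll moment command."""
    rate_target = attitude_pid.update(roll_target - roll_measured, dt)
    return rate_pid.update(rate_target - rate_measured, dt)
```

The same cascade structure repeats per axis; only the gains and the measured signals change.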

Lastly, I made a quick video summarizing the development and showing the quadrotor in flight using the autopilot developed. (https://youtu.be/X1rk2wLTY-Q)

Hope this helps others understand a little more of the theory behind the codebase and provides some software infrastructure for others to use on their own projects. I am transitioning to SITL with ROS and will implement this simple autopilot in that environment next. Please let me know if you find any errors in the descriptions, equations, or code. Feel free to message me with any questions.

-Wil Selby


Replies

  • Awesome write-up, Wil!

    I had a doubt: do the same equations apply to other quadrotors/multirotors as well? Say, for 3DR's X8+?

    I am still trying to get my head around the control system behind this! :D

    • Karthik,

      The basic equations of motion and control system can remain unchanged from what I originally produced. The equations of motion are described in terms of specific motor speeds as inputs, but you can generalize these inputs to total thrust and the roll, pitch, and yaw moments, and they would apply to any multirotor motor configuration/orientation. What will change is the matrix that maps the desired general control inputs to the desired speed of each motor. This desired speed is ultimately what is sent to the ESCs/motors. This is shown on my Control system design page under the "Motor Control" section.

      For the X8+, this matrix changes for two reasons. The quadrotor I modeled was set up in a "+" configuration, whereas the X8+ is in the "X" configuration. To see how this changes the equations, you can check out Dr. Owenson's page linked above. At the end of the "Acrobatic / Rate mode control" section he posts his motor mixing equations for an "X" configuration, which you can compare to mine for the "+" configuration. The other difference is that the X8 has 8 motors compared to 4. I personally haven't worked with the low-level control of an X8-type flyer, so this is just my assumption, but I think both motors on each arm get the same desired motor speed, and it's just the direction they spin that is different. Basically, the number of motors will change the dimensions of the control-to-motor-speed matrix, and the configuration differences will change the entries in that matrix.
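      To make the mixing-matrix point concrete, here is a toy Python sketch. The motor ordering, signs, and scaling are illustrative assumptions of mine, not the actual ArduCopter motor numbering or mixing factors:

```python
import math

# Each row maps the generalized inputs [thrust, roll, pitch, yaw]
# to one motor's command. All signs here are illustrative only.

# "+" configuration: motors sit on the roll/pitch axes, so each motor
# responds to thrust, yaw, and only ONE of roll or pitch.
MIX_PLUS = [
    [1.0,  0.0,  1.0, -1.0],  # front
    [1.0, -1.0,  0.0,  1.0],  # right
    [1.0,  0.0, -1.0, -1.0],  # rear
    [1.0,  1.0,  0.0,  1.0],  # left
]

# "X" configuration: arms are 45 degrees off the axes, so every motor
# responds to BOTH roll and pitch (with reduced per-axis authority).
s = 1.0 / math.sqrt(2.0)
MIX_X = [
    [1.0, -s,  s, -1.0],  # front-right
    [1.0,  s, -s, -1.0],  # rear-left
    [1.0,  s,  s,  1.0],  # front-left
    [1.0, -s, -s,  1.0],  # rear-right
]

def motor_commands(mix, thrust, roll, pitch, yaw):
    """Multiply the mixing matrix by the generalized control vector
    to get one command per motor."""
    u = (thrust, roll, pitch, yaw)
    return [sum(row[i] * u[i] for i in range(4)) for row in mix]
```

      For an X8, the matrix would simply grow to 8 rows, with coaxial motor pairs sharing the same roll/pitch entries and opposite spin directions handling yaw.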

      Hope this helps and let me know if you need more detail.

      -Wil

    • Hi Wil!

      Thank you for your reply!

      I will take a look at your recommendations and study up on it.

      You are a life saver!! I will get back to you in a few days after I know more about this!

    • Hi Wil,

      I have to write quickly since formatted ASCII gets processed into unformatted text at random.

      I am developing an ArduDrone controller based on timestamped 3D waypoints and a smooth Bezier curve built in 3D space around those waypoints.

      Such an algorithm works fine in the case of ArduBoat and 2D waypoints.

      A smooth Bezier curve in 3D space, built around the 3D waypoints, could save battery life.

      Speed can be extracted from the distances between waypoints projected onto the Bezier curve in 3D space, and heading from the Bezier curve itself.
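      A minimal Python sketch of the idea above; the function names and the chord-based approximations are illustrative assumptions, not an actual implementation:

```python
import math

def bezier_point(ctrl, t):
    """Evaluate a Bezier curve at t in [0, 1] via de Casteljau's algorithm.
    ctrl is a list of 3D control points (waypoints, in this sketch)."""
    pts = [list(p) for p in ctrl]
    while len(pts) > 1:
        pts = [[(1 - t) * a[i] + t * b[i] for i in range(3)]
               for a, b in zip(pts, pts[1:])]
    return tuple(pts[0])

def heading(ctrl, t, dt=1e-4):
    """Approximate heading (yaw) from the curve tangent at t,
    using a finite difference in the horizontal plane."""
    x0, y0, _ = bezier_point(ctrl, t)
    x1, y1, _ = bezier_point(ctrl, min(t + dt, 1.0))
    return math.atan2(y1 - y0, x1 - x0)

def arc_length(ctrl, samples=100):
    """Approximate curve length by summing small chords. Combined with
    the waypoint timestamps, this gives an average speed along the curve."""
    total = 0.0
    prev = bezier_point(ctrl, 0.0)
    for k in range(1, samples + 1):
        cur = bezier_point(ctrl, k / samples)
        total += math.dist(prev, cur)
        prev = cur
    return total
```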

      I can buy a $200 electric car for kids whose steering is controlled by turn-left/turn-right switches, so the steering wheel is wired to the power steering via those switches.

      So if you turn the steering wheel to the right, the turn-right switch is on, making a motorized actuator turn the wheels to the right.
      So turns are controlled by selecting the turn-right/turn-left switch and by how long the switch stays on.

      Another nice control algorithm: (left/right Boolean selector, time on, time off).
      If time on = time off, turning is not activated.
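      A minimal sketch of that duty-cycle idea, assuming a normalized steering command in [-1, 1] (the function name and convention are hypothetical):

```python
def steer_switches(turn_command, period=1.0):
    """Convert a continuous steering command in [-1, 1] into
    (left_on_time, right_on_time) within one control period.
    Equal on-times cancel out and the vehicle goes straight."""
    if turn_command > 0:        # positive means turn right
        return (0.0, turn_command * period)
    elif turn_command < 0:      # negative means turn left
        return (-turn_command * period, 0.0)
    return (0.0, 0.0)           # zero command: neither switch is on
```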

      Pls let me know your opinion.

      BTW
      Could we move some discussions to Facebook, since traffic and interest here are low?

      darius

    • Darius,

      Send me a quick email at accounts@wilselby.com and we can discuss your project further. I don't have much experience with ground vehicles but your project sounds interesting and hope I can help!

      -Wil

    • Thank you Wil,

      Frankly speaking, I would prefer group discussion and development, since I love interaction.

      I do hope to join Professor Nicholas Roy's team at MIT, since they have developed real-time 3D drone flight visualization, 3D control software, a collision-avoidance algorithm, laser-based radar, and more R&D tools.

      Joining an academic project, we can learn more and more ;)

      Please watch this video once again to catch what I mean:

      http://news.mit.edu/2012/autonomous-robotic-plane-flies-indoors-0810

      Autonomous robotic plane flies indoors
      New algorithms allow an autonomous robotic plane to dodge obstacles in a subterranean parking garage, without the use of GPS.
    • Darius,

      Apologies, I misread your earlier post.

      I would recommend looking into ROS if you haven't already. ArduCopter has some SITL versions that can run using the Gazebo physics simulator. ROS is one of the primary pieces of code used for academic robotics research. The Gazebo simulator is also pretty sophisticated and can simulate collisions and sensor readings.

      The MIT video looks to use some sort of real-time SLAM algorithm, using either stereo cameras or a simple depth camera. Once you get the local map, you can identify obstacles and have a high-level path planning algorithm create a path around the obstacle, taking into account your vehicle's dynamics. That path gets broken down into smaller waypoints that are fed to the robot's lower-level position controllers. This usually requires some processing power and can be done onboard with a companion computer or streamed from a nearby GCS.
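      As a purely illustrative sketch of the "break the path into smaller waypoints" step (the helper name and the linear interpolation are my own assumptions, not any specific planner's API):

```python
import math

def densify_path(path, max_step):
    """Insert intermediate waypoints so no segment exceeds max_step.
    path is a list of (x, y, z) tuples from a high-level planner."""
    out = [path[0]]
    for a, b in zip(path, path[1:]):
        dist = math.dist(a, b)
        # Number of sub-segments needed for this leg.
        n = max(1, math.ceil(dist / max_step))
        for k in range(1, n + 1):
            out.append(tuple(a[i] + (b[i] - a[i]) * k / n
                             for i in range(3)))
    return out
```

      A real planner would also respect the vehicle dynamics between waypoints, but this shows the basic idea of feeding denser targets to the position controllers.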

      Using ROS, you can have access to SLAM implementations that rely on the Kinect sensor, and I believe there are some path planning algorithms implemented as well. You can test everything in the Gazebo simulator using the ArduCopter SITL and then port your implementation to the real system. I don't think DroneKit is fully set up yet, but that would also be an interesting API to bridge your path planning and lower-level autopilot control.

      Seth Teller's lab has done similar work using quadrotors, and Russ Tedrake has similar work on fixed-wing aircraft. Both work in the CSAIL lab at MIT as well. You might also want to look at Daniela Rus's lab, but she focuses more on multi-vehicle control algorithms.

      Best,

      Wil

  • Awesome write up!

    Most of the code Dev's can be found over here if you wanted to possibly exchange ideas:  https://groups.google.com/forum/#!forum/drones-discuss

    -Mike

  • Hi Wil,

    My congratulations on your autopilot development R&D paper.

    This is a must-read tutorial for every drone operator.

    Do you have plans to build a Second Life-style 3D simulation environment for drone developers from scratch?

    For a given frame, motors, battery, hardware, and sensors, a virtual drone would be simulated and tested in the 3D environment.

    I am just testing robot vacuum cleaners that come with 3 sonar sensors and a docking station featuring sonar and an IR transmitter.

    I am going to get an Arduino/smartphone to control my robot cleaner remotely, generate virtual obstacles, and record a GPS track outdoors.

    Today I play a game of 2 robot vacuum cleaners competing for access to a single charging station.

    I have bought 5 robot vacuum cleaners for tests.

    My congratulations, once again.

    Excellent work, excellent R&D paper.

    darius

    manta103g@gmail.com

    • Darius,

      Glad you found the write-up useful.

      Haven't looked into the Second Life simulation environment yet. I'm starting with ROS/Gazebo since it is already pretty well resourced and documented. It allows users not only to create fairly complex worlds but also to add common sensors such as the Kinect for simulation. Since I don't have a lot of space to fly outdoors, simulation is the next best thing. It also allows me to experiment without having to purchase anything. I'm hoping to get some basic SLAM working using the Kinect sensor in the simulation environment, then get it working in real life using a companion computer and Kinect on board my quad. Ultimately, I would like to have the quadrotor make a map of the surface and have a ground robot use that map for obstacle avoidance/path planning.

      Thanks for the Second Life pointer and good luck with your ground robot research.

      -Wil
