The current architecture of drone autopilots is wrong; drone hardware architecture needs to be completely re-engineered.

I know this statement will raise the question: who is this guy telling us that current-day autopilots are all wrong? Well, I have been flying RC planes since the age of 10 and have used autopilots from the early ArduPilot of 2009-2010 vintage to the more recent ones from 3DR, DJI, and FeiyuTech, including their cheap clones, over the last 5-6 years. I have been coding since the age of 15 and am now 21 years old.
Based on my experience with this wide range of autopilots, I have come to the conclusion that the hardware of the majority of autopilots is adapted from the world of data-based computing, made for processing huge chunks of predefined data and producing an appropriate notification or display. In data-based computing, inputs come from a low-response data source like Ethernet/internet or some sensor network; this data is processed, and the outputs are notifications, a display, or in a few cases some very slow controls. Nothing where high-speed control of a dynamic object is involved, even on a single axis.
Hence the question: are these processors and this hardware made for controlling a dynamic object moving freely across three axes, like a drone?
After using all types of available autopilots, I realized that drone control at its core requires the following steps to be repeated as fast as possible:
1. reading sensor values and conveying them to the controller/processor
2. filtering these sensor values
3. pushing the filtered values into a PID loop
4. transferring control commands to the actuators for immediate action.
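The four steps above can be sketched as one fixed-rate loop. This is a minimal single-axis illustration, not any particular autopilot's code: the PID gains and the complementary-filter constant are made-up values, and the sensor/actuator I/O is left abstract in comments.

```cpp
#include <cassert>
#include <cmath>

// Step 3 of the cycle: a single-axis PID controller (gains illustrative, not tuned).
struct Pid {
  float kp, ki, kd;
  float integral = 0.0f;
  float prevErr  = 0.0f;
  float update(float setpoint, float measured, float dt) {
    float err = setpoint - measured;
    integral += err * dt;                // accumulate for the I term
    float deriv = (err - prevErr) / dt;  // finite-difference D term
    prevErr = err;
    return kp * err + ki * integral + kd * deriv;
  }
};

// Step 2: a complementary filter, one cheap way to fuse gyro rate with an
// accelerometer-derived angle on a small MCU.
float complementary(float prevAngle, float gyroRate, float accelAngle, float dt) {
  const float alpha = 0.98f;  // trust the gyro short-term, the accel long-term
  return alpha * (prevAngle + gyroRate * dt) + (1.0f - alpha) * accelAngle;
}

// One full cycle, matching steps 1-4 (I/O left abstract):
//   1. raw   = readImu();
//   2. angle = complementary(angle, raw.gyroRate, raw.accelAngle, dt);
//   3. out   = pid.update(targetAngle, angle, dt);
//   4. writeMotors(out);
```

The stability argument in the text maps directly onto `dt`: the shorter this cycle, the smaller `dt`, and the sooner a disturbance measured in step 1 is corrected in step 4.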

This cycle needs to be repeated over and over, and the faster the better. This is what determines the stability of the drone: the higher the cycle rate, the higher the stability. So what is needed in the case of drones is a continuous high-speed input-output, action-reaction control system. I realized that drone control is not so much about data crunching as about the speed of the control cycle.

If the use of drones is to grow, developers have to be given the freedom to code for their applications without compromising this core control cycle. In a drone, a developer's code causing a system hang will result in catastrophic outcomes: crashes or fly-aways, both of which have been regularly reported on current autopilots. Achieving high control-cycle speeds while isolating the flight controls is not possible with the current architecture of sequential processing, hence the future of drones is limited by the architecture of currently available autopilots.

So unless a new thought process emerges, drone use cannot grow exponentially. What is needed is a motherboard that is radically different from anything available today.

I have been working on this for a while now, and my first-hand experience is that the moment I shifted my focus to achieving higher-speed control loops with my self-designed autopilot, the level of stability and performance I have been able to get is awesome, even in very high coastal wind speeds on a small 250 mm racer. I achieved this with the most primitive of microcontrollers, the ATmega328 used in the first ArduPilot. Things got even better when I replaced the MPU-6050 IMU with the MPU-9250.

With my custom-made Distributed Parallel Control Computing Bus, I have been able to achieve altitude hold with a total drift of less than 1 meter, a very accurate heading hold, and GPS navigation on the 250 mm racer. All I did was add another ATmega328 in parallel to the first one to add features.

Thanks to this, I am able to completely isolate the core flight-control loop from app development code, so the drone is never compromised by faulty app code.

Distributed parallel control computing, I have found from my own experience, is an architecture that really has the potential to create exponential growth in drone applications. I would be interested to know of any other ways others are trying to address this unique core control-processing requirement of drones.


Replies to This Discussion

Somewhat ironically, given the topic at hand, Darius Jack was banned for, among other things, failing the Turing test.

Very good!  Keep up the great work!


Some of the original Swiss Pixhawk implementations were on the Gumstix processors that have both an ARM and a fast DSP. The DSP is good for the core control loops.

Here is a link to the gumstix site.


Yes, the point of adding further features reliably (instead of testing the code for 20 hours every time a feature is added), expanding the options on the AP, adding further processors, and a simpler SDK (Java-based) was where this idea of mine originated. Then, as above, I had to do a number of tests and a lot of research to understand whether I was on the right track or just wishfully thinking. Then I thought of the most reliable processor I have ever used (look no further: it is the Atmel AVR, reliable during production and operation as well as low cost). I had made a basic flight-stabilization module before (something like a MultiWii) on a 328, so I thought, why not try it using two on a bus, where one is the Kinematic Control Unit (KCU) and the other the Interface and Higher Operation Unit (IHOU)? I had quite a few nightmares that questioned the foundation of my technology, but I overcame them by implementing parts of current filters and fusers instead of blocks of code (I was very stingy on memory usage, so I shifted to .asm in some libraries). Using the 328 had another advantage: it improved my coding efficiency by 4-5x because I had limited resources to use :).

It worked in the end. It was tough to develop the FC, but it isn't tough to expand it, as any extra cores added and interfaced are never going to affect the core kinematic control in any way (so no crashes while developing an app). Will ping you; looking forward to you being a beta tester for my FC ;-).

My 2 cents worth - on any given day I'll switch between the "infinitely parallel" languages of VHDL and Verilog to the sequential style of embedded code. This affects my intuitive view of how things interact to such an extent that I find the sequential operations of software, even when they are wrapped in a multi-threading RTOS, quite disturbing!

In an FPGA system I would expect a task to be completed in somewhere between 2 ns and 20 ns, and the abstract language used to design these mystical devices allows it to run an almost unlimited number of tasks at the same time. In a processor-based system the same task would take microseconds or even milliseconds, and then it would need to do the next task and the next and so on. The cumulative time could add up to hundreds of milliseconds.

It would be difficult to convince those of you who have trained your brain for so many hours to see things as sequential processes why there is such elegance in realtime parallel processing, but let me give an example:

Try to perform any action like walking through a door or making a cup of tea by doing only one thing at a time. You can switch between tasks as often as you like so that you mimic an RTOS or you can complete a task before switching to the next one. For this experiment, you must consider every little action that you make including breathing, blinking, moving your eyes, moving any part of your body and thinking.

This gives you some idea of how the world feels to a conventional microprocessor based system. Now repeat the above task as you would normally and you get an idea of how a decent FPGA system feels.

From a practical perspective, conventional processing may well be "fast enough" to manage the things that most drones will be used for. But imagine the possibilities...

It's amazing how bright 21-year-olds are, especially since that's only six years since you were 15. Just imagine how great you're going to be when you're 27. Do that 5 more times and you'll be 57. Then you will look back and see how smart you really were.

Something else to think about:
Air moves at a finite speed, blades turn at a finite speed, and mass can only be accelerated by a finite force.
Trying to make control corrections before the mechanical action from the last correction has even begun is hardly worthwhile.

If you can only type 10 characters per second on a keyboard, sampling the keyboard a thousand times per second is not useful.

We worked on the first lunar lander software, probably before your father was born, with very simple computers.

You can make really fast control loops with slow hardware if you understand the Nyquist sampling criterion and don't waste computation time where it's not effective.
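The point about Nyquist and the keyboard can be made concrete. Nyquist sets the theoretical floor on the sampling rate; the 10x design margin below is a common digital-control heuristic, my number rather than anything from the comment:

```cpp
// Nyquist: you must sample at least twice as fast as the fastest change in
// the signal (the 10-characters-per-second keyboard needs only ~20 Hz).
double nyquistFloorHz(double bandwidthHz) {
  return 2.0 * bandwidthHz;  // minimum rate to capture the signal at all
}

// Digital-control practice adds margin above the plant (actuator/airframe)
// bandwidth; sampling far beyond this yields rapidly diminishing returns.
double practicalLoopRateHz(double plantBandwidthHz, double margin = 10.0) {
  return margin * plantBandwidthHz;
}
```

For example, if the motors and airframe cannot respond faster than ~30 Hz, `practicalLoopRateHz(30.0)` suggests a ~300 Hz loop is plenty; kilohertz-range loops mostly resample the same mechanical state.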

This is why I like this forum... such interesting information from such smart individuals....

it's humbling....

great post!


Remember your proposal from the companion computer group?

I guess it would really be appropriate for this type of architecture.

Hi LD,

Another point on which we have significant concurrence.

Cone Laser scanner is in process with Nvidia TK1.



Sounds like you've done a lot of work! I think a few things need to be clarified though so here are my 2 cents:

  • Faster update rates are better when using discrete sampling, since the underlying continuous system can be better approximated - this is particularly true for highly dynamic vehicles. However, you tend to see diminishing returns.
  • There is a maximum speed you can send the commands to the actuators and the speed with which the actuators can make a meaningful input depends on their dynamics (rotor inertia, servo speed etc).
  • In my opinion, separating the flight critical tasks from non flight critical on separate hardware is a good way to continue development while maintaining reliability. This is why companion computers are gaining traction.
  • Although parallel distributed architectures can have merit, in my opinion, splitting the flight-critical tasks themselves across separate hardware decreases reliability, due to the increased complexity and to having multiple processors, any of which could cause a system failure.
  • Assuming the hardware can run the estimation and control algorithms sufficiently fast and the vehicle is well designed, I'd say the main contributor to performance is the algorithms themselves and the quality of the sensors. This is not only the algorithm architecture, but also their (correct) implementation. Sometimes the algorithm may seem like its working, but the performance may be inhibited by a small bug in the implementation.
  • An EKF is not a type of SLAM algorithm, however an EKF can be used within SLAM as the data fusion mechanism (as can many other algorithms). An EKF is an extension of the Kalman filter which is used to estimate quantities by fusing sensor data. Many other variations exist, including unscented Kalman filters, sigma point etc. In the context of UAVs, the EKF normally estimates position, velocity, attitude etc however it could estimate much more, such as thrust, wind, vehicle drag coefficients etc.
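To illustrate the last bullet: in the linear, scalar case the (E)KF reduces to a few lines, and the EKF generalizes exactly this predict/update skeleton to nonlinear models. A toy one-dimensional fuser with made-up noise variances:

```cpp
#include <cassert>
#include <cmath>

// A one-dimensional linear Kalman filter: the simplest instance of the
// predict/update cycle that the EKF extends to nonlinear systems.
struct Kalman1D {
  double x;  // state estimate (e.g. altitude in metres)
  double p;  // variance of that estimate
  double q;  // process noise variance (how fast the true state wanders)
  double r;  // measurement noise variance (how noisy the sensor is)

  void predict() { p += q; }      // uncertainty grows between measurements

  void update(double z) {
    double k = p / (p + r);       // Kalman gain: how much to trust z
    x += k * (z - x);             // blend prediction with measurement
    p *= (1.0 - k);               // uncertainty shrinks after fusing
  }
};
```

Starting from a vague estimate (`x = 0`, `p = 1`) and feeding a measurement of 10 with equal sensor variance pulls the estimate roughly halfway, exactly the weighted-average behaviour described above.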

@David James:

Yes, I agree a DSP is a boon while creating an autopilot, as it serves as a math offload core.

In fact, in further designs I will be implementing a TMS320F6678 (as the interface core). There are also ways to enhance speed on normal MCUs and MPs.

I realized that I was using the tangent quite a bit in all my calculations (GPS nav, fusers, some kinematic calculations), so I decided to divide 360 degrees into 720 parts, giving me a resolution of 0.5 degrees, and stored all the values in program memory as int16_t, except for 90 and 270. I created a custom function, "float myTan(int16_t angl)". The stored values span -32767 to 32767; divide by 100 and the usable range is -327.67 to 327.67. It cost a bit of flash, but boy, it was damn fast compared to the native method.
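A possible reconstruction of the lookup described above (the function bodies and names other than `myTan` are mine, not the poster's): 720 half-degree steps, `tan(angle) * 100` stored as `int16_t`, with 90 and 270 degrees excluded because the tangent is undefined there. On an ATmega328 the table would be precomputed and placed in flash with PROGMEM; here it is built at startup for portability.

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <cstdint>

constexpr int STEPS = 720;  // 360 degrees in half-degree increments
const double PI = 3.14159265358979323846;

// Build the table once: tan(i * 0.5 deg) * 100, rounded to int16_t.
// Largest magnitude is at 89.5/90.5 deg (~11459), which fits comfortably.
std::array<int16_t, STEPS> buildTanTable() {
  std::array<int16_t, STEPS> t{};
  for (int i = 0; i < STEPS; ++i) {
    if (i == 180 || i == 540) { t[i] = 0; continue; }  // 90/270 deg: undefined, caller must avoid
    double rad = (i * 0.5) * PI / 180.0;
    t[i] = static_cast<int16_t>(std::lround(std::tan(rad) * 100.0));
  }
  return t;
}

// Argument is in half-degree units, i.e. myTan(90) == tan(45 degrees).
float myTan(int16_t halfDegrees) {
  static const std::array<int16_t, STEPS> table = buildTanTable();
  int i = ((halfDegrees % STEPS) + STEPS) % STEPS;  // wrap into [0, 720)
  return table[i] / 100.0f;                         // undo the *100 scaling
}
```

The speedup on an AVR comes from replacing a software floating-point `tan()` (hundreds of cycles) with one table read and one division by a constant.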

@Lazer developer:

My architecture is individually sequential, but combined it is a parallel, dual-core, real-time (right down to the picosecond) parallel processing system.

As in :

1) When the sensor is being sampled on CORE1, CORE2 is doing GPS math completely independently (without any hardware dependency).

2) When the WriteMotors() function is called on CORE1, CORE2 processes the GPS interrupt.

What I am trying to get at is that individually each core is sequential, but as a whole it is a hardware-layered RTOS; hence, even though they are separate, each core's individual throughput is higher than it would be running the system as a whole.

What are the advantages compared to a single-chip hardware solution using the same resources?

1) Since they are isolated and independent, no failure can occur on the main control core due to a hardware problem on the Higher Operation Unit (CORE2).

2) If a person developing an app is not very conversant with the control events and hardware lines used by the autopilot, any code he develops can potentially lead to a system crash on a shared chip; here it stays confined to CORE2.

3) FPGAs are very good in performance (I have used the MOJO FPGA), but they are expensive to procure and more complicated to manufacture reliably: if the pins are exposed to static during production, the gates get discharged and the FPGA becomes unusable.

For example, I recently developed an app for a person who wanted a flash trigger fired at a height of 50 m (like a flashlight). As mentioned before, I have developed libraries for embedded C++ and Java that communicate over UART. All that has to be done is to connect an SoC like an RPi 2 or BeagleBone (anything running a JVM) via a hardware UART port to the autopilot's CORE2 UART.

Have a look at the code attached; it is a live app I am working on. You can see that none of the while loops will cause any problem, and even if the RPi's governor suddenly fails, it will not affect the performance of the drone.
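Since the attached code is not reproduced here, the following is only a hypothetical sketch of the companion-side logic for such an app. The `ALT:<centimetres>` line framing and the `FlashTrigger` name are invented for illustration; the real protocol lives in the author's UART libraries. The architectural point stands either way: this code runs on the companion SoC, so even an infinite loop here cannot touch the flight core.

```cpp
#include <cassert>
#include <cstdlib>
#include <string>

// Hypothetical companion-side app: watch altitude telemetry lines from the
// autopilot's CORE2 UART and fire a flash trigger once at 50 m.
struct FlashTrigger {
  bool fired = false;
  int thresholdCm = 5000;  // 50 m, as in the example above

  // Feed each line read from the UART. Returns true exactly once,
  // the first time the reported altitude reaches the threshold.
  bool onLine(const std::string& line) {
    if (line.rfind("ALT:", 0) != 0) return false;  // ignore other telemetry
    int altCm = std::atoi(line.c_str() + 4);       // parse the centimetre value
    if (!fired && altCm >= thresholdCm) {
      fired = true;                                // latch so it fires only once
      return true;                                 // caller toggles the light
    }
    return false;
  }
};
```

Feeding it `"ALT:1000"` does nothing; the first `"ALT:5100"` returns true once, and later altitude lines are ignored because the trigger has latched.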

As I have mentioned earlier: "A system hang is not an option in the case of drones, hence very early in my AP build I decided that drone flight integrity had to be the top priority when making it app-expandable. The developer should have the comfort of knowing that any programming errors will not result in a crash."





© 2020   Created by Chris Anderson.   Powered by
