I know this statement will raise the question: who is this guy telling us that current-day autopilots are all wrong? Well, I have been flying RC planes since the age of 10 and have used autopilots from the early ArduPilot of 2009-2010 vintage to the more recent ones from 3DR, DJI, and FeiyuTech, including their cheap clones, over the last 5-6 years. I have been coding since the age of 15 and am now 21 years old.
Based on my experience with this wide range of autopilots, I have come to the conclusion that the hardware of most autopilots is adapted from the world of data-based computing: built for processing huge chunks of predefined data and producing an appropriate notification or display. In data-based computing, inputs come from low-response data sources like Ethernet, the internet, or a sensor network; this data is processed, and the outputs are notifications, a display, or in a few cases some very slow controls. Nothing involves high-speed control of a dynamic object, even on a single axis.
Hence the question: are these processors and this hardware made for controlling a dynamic moving object with freedom across three axes, like a drone?
After using all types of available autopilots, I realized that drone control at its core requires the following steps to be repeated as fast as possible:
1. Reading sensor values and conveying them to the controller/processor
2. Filtering these sensor values
3. Pushing the filtered values into a PID loop
4. Transferring control commands to the actuators for immediate action
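The four steps above can be sketched in C for a single axis. This is a minimal illustration, not any particular autopilot's code; all names, gains, and the filter constant are assumptions.

```c
#include <assert.h>

typedef struct {
    float kp, ki, kd;
    float integral;
    float prev_error;
} pid_state;

/* Step 2: first-order low-pass filter on a raw sensor value. */
float lowpass(float prev_filtered, float raw, float alpha)
{
    return prev_filtered + alpha * (raw - prev_filtered);
}

/* Step 3: one PID update; dt is the cycle period in seconds.
   The returned value would be sent to the actuators (step 4). */
float pid_update(pid_state *pid, float setpoint, float measured, float dt)
{
    float error = setpoint - measured;
    pid->integral += error * dt;
    float derivative = (error - pid->prev_error) / dt;
    pid->prev_error = error;
    return pid->kp * error + pid->ki * pid->integral + pid->kd * derivative;
}
```

In a real firmware these two functions would sit inside a tight loop together with the sensor read and the actuator write, and the loop period would be kept as short and as constant as possible.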

This cycle needs to be repeated over and over, and the faster the better: the higher the cycle rate, the higher the stability. So what drones need is a continuous high-speed input-output, action-reaction control system. I realized that drone control is not so much about data crunching as about the speed of the control cycle.

If the use of drones is to grow, developers must be given the freedom to code for their applications without compromising this core control cycle. On a drone, developer code that hangs the system leads to catastrophic outcomes, namely crashes or flyaways, both of which have been regularly reported with current autopilots. Achieving high control-cycle speeds while isolating the flight controls is not possible with the current architecture of sequential processing; hence the future of drones is limited by the architecture of currently available autopilots.

So unless a new thought process emerges, drone use cannot grow exponentially. What is needed is a motherboard that is radically different from anything available today.

I have been working on this for a while now, and my first-hand experience is that the moment I shifted my focus to achieving higher-speed control loops with my self-designed autopilot, the stability and performance I was able to get were excellent, even in very high coastal winds on a small 250 mm racer. I achieved this with the most primitive of microcontrollers, the one used in the first ArduPilot: the ATmega328. Things got even better when I replaced the MPU-6050 IMU with the MPU-9250.

With my custom-made Distributed Parallel Control Computing bus, I have been able to achieve altitude hold with a total drift of less than 1 meter, a very accurate heading hold, and GPS navigation on the 250 mm racer. All I did was add another ATmega328 in parallel with the first one to add features.

Thanks to this, I am able to completely isolate the core flight-control loop from app development code, so the drone is never compromised by faulty app code.

Distributed parallel control computing, I have found from my own experience, is an architecture that really has the potential to create exponential growth in drone applications. I would be interested to know of other ways in which people are trying to address the unique control-processing requirements of drones.



    • @Micha

      Yes, the ability to add further features reliably (instead of testing the code for 20 hours every time a feature is added), to expand the options on the AP, to add further processors, and to offer a simpler SDK (Java-based) is where this idea of mine originated. I then had to do a number of tests and a lot of research to understand whether I was on the right track or just thinking wishfully. I settled on the most reliable processor I have ever used (look no further: the Atmel AVR, reliable in production and in operation, as well as low cost). I had made a basic flight-stabilization module before (something like a MultiWii) on a 328, so I thought, why not try it using two on a bus, where one is the Kinematic Control Unit (KCU) and the other the Interface and Higher Operation Unit (IHOU)? I had quite a few nightmares that questioned the foundation of my technology, but I overcame them by implementing parts of current filters and fusers instead of whole blocks of code (I was very stingy on memory usage, so I shifted to assembly in some libraries). Using the 328 had another advantage: it improved my coding efficiency 4-5 fold, because I had limited resources to work with. :)

      It worked in the end. It was tough to develop the FC, but it isn't tough to expand it, as any extra cores added and interfaced will never affect the core kinematic control in any way (so no crashes while developing apps). Will ping you; looking forward to you being a beta tester for my FC. ;-)

      • You made an excellent choice for the CPU; very reliable indeed. Limited resources lead to efficient code, and that leads to fast code. ;) I have a bunch of frames waiting for you: quads, a tricopter, hexas. I used to help beta test version 3.2 with Randy, and I also helped with the development of the micro VRBrain by Roberto; I found a nasty PPM bug in the first hardware revision. I am also looking forward to helping with your project. I share your enthusiasm, as I am convinced that this is the way forward for flight controllers. ;)

  • Developer
    Hz are the megapixels of drones.

    A faster control loop does not necessarily translate to better physical control. Can you show real data proving otherwise?
    • Hi Kabir,

      1) I could not understand part 1 about megapixels: my post is about the autopilot, not a video system.

      2) Regarding the control loop question: it has been answered both by myself and by others like Stephen above.


      • Venkatesh, Kabir was making a comparison with the camera megapixel race: more megapixels do not make better cameras. ;)
        However, as a professional photographer I must say he is wrong. Kabir, take a look at the signal-to-noise ratio and dynamic range of the latest high-resolution APS-C and full-frame sensors compared to the lower-resolution versions of just a few years ago, and you will be amazed. ;)

  • T3

    As a mechanical engineer who has taken a few control-theory classes: yes, a faster loop will definitely help with stability. However, in my experience as a hobbyist, a well-tuned Pixhawk multirotor is already very stable. I would welcome faster loops with open arms, but not at the expense of any single existing feature in ArduCopter. Good luck!

    • @Stephen

      Thanks for your wishes.

      Have a look at the video link in my earlier reply; you will notice that the stability of the copter is solid across a range of platform sizes.

      As for features, everything ArduCopter offers is a default here, and in addition the hardware is expandable and app-customizable, which is not a feature any existing autopilot offers.

      ZUPPA kLIk is an app-expansion hardware module we have created on the base ZUPPA autopilot, made for mapping and agriculture applications. It is a complete hardware solution in which the autopilot triggers the camera and logs GPS latitude, longitude, altitude, and the drone's attitude angles on XYZ at the instant of the trigger.

      This provides geo- and data-tagged images for 3D mapping. ZUPPA kLIk is an app-specific expansion of the base ZUPPA autopilot.

      BTW, I am working on an open app release of the ZUPPA autopilot just to demonstrate the feature-expansion capabilities, and should be posting that this week.


  • Hi James & Patrick,

    Thanks a lot for your comments and responses to my post. Sorry I could not respond earlier because of the pre-weekend workload yesterday.


    @James: Thanks a lot for your appreciation of my autopilot ZUPPA.

    Now to the rest of your comment:

    My post is not intended to criticise the current architecture of APs at all; on the contrary, it aims to stimulate thinking about taking AP technology to the next level: creating user-friendly and app-expandable drones that effectively make the drone a TECH GADGET used by many for various applications, like the computers or smartphones of today.


    The goal of the current AP architecture was to fly a drone and electronically stabilize it along three axes, to a large extent something like the early PCs that ran on DOS, or the early mobile phones that we used just to make telephony mobile. Subsequent evolutions of these gadgets have shown that the more user-friendly they became and the more applications they could be put to, the more their usage, mass adoption, and consumption grew.

    E.g., when resistive stylus-touch smartphones gave way to capacitive finger touch, thanks to Steve Jobs, that was technological evolution. So it has to be for drones.


    So the current architecture has achieved the goal of flying the drone, but as I see it, future APs need to make a drone user-friendly and app-expandable.

    I believe that the current direction of using higher and higher processing power for APs is not the way forward, as drone control is not about data processing like the computers of the past.

    For a drone, a system hang like those we tolerate on a PC or smartphone is not an option.

    For the drone to become a TECH GADGET, isolating flight control from other processes and assigning it the highest priority is the way forward.


    Sorry if I have not been able to convey this intention of my post.

    Drone technology needs to evolve to make drones user-friendly and app-expandable.


    My statement that AP architecture in its current state is restricted is based on the following:

    Let's take the case of the two leading APs of today:

    1. 3DR APs like APM and Pixhawk: though they are the only ones usable on both fixed and rotary wing, and are app-expandable to an extent, the user needs significant DIY skills to tune them to a chosen airframe. Only 3DR's ready-made drones might be user-friendly in full-auto mode; the APs themselves are in no way motherboards that can make a drone user-friendly and app-expandable.
    2. DJI Naza: only usable on copters, not on fixed wing; it might be user-friendly, but its app expandability is seriously questionable.


    I have personally experienced, over the past 5 years and from the early ArduPilots to the more recent ones, that these two major limitations are preventing ultra-local manufacturing of drones.


    A drone-pilot motherboard, like the one that opened up PC assembly at the ultra-local level, is key to opening up the potential of drone use across applications and turning the drone into a TECH GADGET.


    The intention of my post is to stimulate thinking on alternative architectures that could make any drone a USER FRIENDLY & APP EXPANDABLE gadget.


    My idea was to share my experience with one such alternative that I designed and developed, which actually addresses these limitations and converts any drone into a USER FRIENDLY & APP EXPANDABLE tech gadget.


    Here are details of my development that reinforce the post's claim that ZUPPA, my AP that uses DPC, has the ability and the potential to make any drone a USER FRIENDLY & APP EXPANDABLE tech gadget:

    a) A block diagram comparing my DPC architecture with a conventional AP architecture is attached below.

    Specification: enclosed datasheet, ZUPPA V1


    Videos : https://youtu.be/lOvP3wjVj_s


    We have gone through multiple iterations in the design of this AP.


    ZUPPA autopilots fly both copters and fixed wing.


    In fact, because this design isolates flight management on core 1, I am able to work on a model in which the app interface is open source, so that any coding errors made during app development will not crash the drone, thereby reducing the risk for developers.


    @Patrick: you're right in a way; enclosed is a photo of my AP board, which in a sense is like putting a couple of Nanos together.

    But I am using direct C and assembly to code the primary AHRC (Attitude and Horizon Reference Controller, core 1), while core 2 is done in AVR C++.

    Why such primitive languages? Frankly, because code that maps closely to the machine has less impact on the actual execution cycle: fewer implicitly generated instructions from high-level to low-level language conversion.

    Conceptually, the first Nano only manages flight parameters like stability, heading, and altitude, i.e., drone attitude; since this is its only function, free of any extraneous interference, it does it efficiently at high speed. The second Nano (core 2) handles the GPS maths, velocity-vector models, user apps, etc. My GPS system creates its own grid as soon as it is locked (PDOP-based). Hence, for future SLAM algorithms (for spatial map generation), stationary features (statics) identified by an external sensor can be placed on the grid, i.e., Xf = {[xi, yi]} can be represented on the grid.

    Core 2 also handles all receiver commands from the user and the UART interface to the GCS through a module; the power manager connects to this core, as does the GPS. Hence the second core is actually more heavily loaded than core 1 (the kinematic control core). But that doesn't affect performance at all, as the fastest correction from core 2 has to be generated at a maximum update rate of 50 Hz (the IMU velocity correction), and its performance has no direct effect on the drone's final behavior. Even with all the maths for navigation and real-time positioning, the loop cycle on the 328 is 6 ms in the worst case and 4 ms in the best case.

    Core 2 doesn't otherwise need the 16-bit timer, so it is used to keep track of the time since the latest cycle start and to jump the program counter to the cycle end if it exceeds 10 ms (which happens roughly once every 10,000,000 cycles when the UART is updated at 50 Hz, and doesn't happen at a typical ≤30 Hz UART update). It is something like a timer-based instruction governor used to prevent system hangs, so reliability is very high.
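    A cycle governor of this kind can be sketched as below. The tick numbers are illustrative assumptions (a 16-bit timer at 2 MHz gives 20,000 ticks per 10 ms), not the actual ZUPPA configuration.

```c
#include <stdint.h>
#include <assert.h>

#define CYCLE_BUDGET_TICKS 20000u   /* 10 ms at an assumed 2 MHz timer clock */

/* Returns nonzero when the current cycle has exceeded its time budget
   and the remainder of the cycle should be skipped. */
int cycle_overrun(uint16_t start_ticks, uint16_t now_ticks)
{
    /* Unsigned 16-bit subtraction handles timer wraparound correctly. */
    uint16_t elapsed = (uint16_t)(now_ticks - start_ticks);
    return elapsed > CYCLE_BUDGET_TICKS;
}
```

    In the firmware, the main loop would call this check at safe points and bail out to the end of the cycle when it fires, which is what keeps a single slow operation from hanging the whole system.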

    I have developed the sensor-fusion code not on absolutes but on relatives, executing only the level of fusion required to achieve the desired goal of ultra-stable flight. Repetitive, high-speed processing of less accurate, relative fused sensor values results in continuous high-speed corrections that achieve the same goal of ultra-stable flight.

    This, in essence, is the key difference between data computing and control computing.

    E.g., the IMU's vertical velocity (after external normalization and compensation) is fused with the barometric rate reading through a differential-equation-based autocorrelation fuser: based on the incremental autocorrelation of the barometric rate, we correct the gain that fuses the vertical velocity with the barometric rate, producing a fused value treated as absolute. Additionally, I have conducted experiments to make the vertical velocity absolute. So yes, I have evaluated the current fusion algorithms, understood them, and implemented segments of them instead of the whole fusion block. They are not as accurate as the absolute sensor-fusion algorithms, but they run much faster; in turn, we can push the output to the PID faster and issue corrections faster, giving less accurate control inputs at a higher rate to achieve the ultimate goal of ultra stability. The magnetometer is compensated using an ellipse-to-circle normalizer, a few rotation matrices to align its axes perfectly with the reference axes, and vector-product compensation.
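    The blending step of such a fuser can be sketched as below. This is only the weighted combination: the gain k is passed in directly rather than derived from baro-rate autocorrelation as described above, and all names are illustrative.

```c
#include <assert.h>

/* Weighted blend of IMU-derived vertical velocity and barometric climb
   rate. k near 1 trusts the baro rate; k near 0 trusts the IMU. */
float fuse_vertical_velocity(float imu_vz, float baro_rate, float k)
{
    return (1.0f - k) * imu_vz + k * baro_rate;
}
```

    A full implementation would recompute k each cycle from the quality of the baro signal, which is where the autocorrelation term described in the text would come in.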


    The fact that I use ATmega328s keeps costs down while achieving the desired goals of ultra flight stability, user-friendliness, and app expandability, which is a major factor from the commercial perspective as well.

    Photos of my autopilot ZUPPA:




    • I like the new concepts; they make us think "outside the box", but at the same time we compare them with existing developments:

      Technically, these single modules could be integrated within a multicore ARM processor and run as separate threads that interact with each other using shared memory or some other pipeline method. Isn't that basically what any autopilot does?

      Don't you think that adding a physical device for each additional sensor makes the system more complicated (and more expensive in the long run) than just linking software modules at compilation time?

      What are the options if you require additional processing power, e.g., for an EKF or any processing that requires more memory and bandwidth (just like the old Arduino-based autopilot)?

      • @Patrick,

        The method of running multiple independent threads is a very good idea, and frankly, to let you in on a secret, the first version of my autopilot was developed on an ARM Cortex-M3; to be precise, an STM32F103RET6 running FreeRTOS. My programming originated in Java (I am a core Java architecture guy), so I was inclined to use threading on the RTOS and crack on with the AP.


        I did just that: one thread for sensor sampling, one for GPS, one for kinematic processing, etc.


        Then I tried it. It worked pretty well about 80-85% of the time, but sometimes one of the threads would hit its worst case and restart, causing an inconsistent operation rate. Additionally, interrupts interfered with each other, so I optimized the ISRs. Sometimes the RMA (Resource Management Agent, the semaphore) did not change the hardware lines in time, or at all, and yes, I had 2-3 fly-offs. This got me thinking about why all this was happening, and I probed deeper and deeper, right down to the elemental level, where I found my answer.


        Now, as per my tests, a 250 mm racer with no payload has a highest mechanical response of about 35-40 Hz; hence, per Nyquist sampling, we need to apply corrections at a rate of 70-80 Hz (say 100 Hz to be safe), for which we need to sample at a minimum of 200-500 Hz, since the sampling rate must be at least twice the correction rate.


        This is a strictly theoretical view, which assumes no external forces or imperfections in the system. For stable flight we need to assume everything ×10 to counter external forces.
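        The rate arithmetic above can be written out as small helpers; the numbers are from the post, but the helper functions themselves are illustrative.

```c
#include <assert.h>

/* Correction rate must be at least twice the highest mechanical
   response frequency (Nyquist). */
float min_correction_hz(float mech_response_hz)
{
    return 2.0f * mech_response_hz;       /* e.g. 40 Hz -> 80 Hz */
}

/* Sampling must in turn be at least twice the correction rate,
   times an extra safety margin for disturbances. */
float min_sample_hz(float correction_hz, float margin)
{
    return margin * 2.0f * correction_hz; /* e.g. 100 Hz, 2.5x -> 500 Hz */
}
```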


        Here is where the problem occurs. Let's probe a little deeper into how an RTOS works. Each thread has an addressed location in flash that the RTOS registers at startup, and threads have priorities so that the semaphore can assign resources to one application when a collision occurs. But in its true essence, the RTOS manages instruction time slices and hardware resource allocation.

        What I am explaining below doesn't matter too much if we want to perform non-real-time or static real-time control, say controlling a 3D printer, but in our case we are doing dynamic near-real-time control.


        Now, let's assume two independent threads running at the same time. At a high level they appear independent, but there is actually a dependency in terms of priority. If, say, the sensor-sampling thread and the GPS-processing thread both need to execute, both are first parsed into the instruction buffer, then popped onto the ALU based on FIFO order or priority. So if a long operation causes delays on the ALU, for example a GPS floating-point computation, it affects the execution consistency of the next instruction, which might be a sensor-sampling one. Certain OSes mitigate this by jumping to the next instruction in the buffer when one takes extra time, but that leaves the current instruction unfinished.

        To mitigate this, we choose higher and higher processing power.


        Secondly, hardware resource-allocation problems. Take the GPS UART port (UART1) and, say, a CLI UART port (UART2) on the same parent port. When the GPS gets data, the semaphore assigns UART1 to the GPS parser thread, and then UART2 is assigned to the CLI parser thread. Yes, they are independent and should never interfere, but at times the system goes into a race condition, especially when multiple interrupts and processes are executing: the semaphore should normally be owned by one process, but it gets owned by two at the same time, and the UART port hangs, something like an Android app crashing when you access multiple apps at a time. (Please note: these observations of mine have been verified using traces, at a task operation rate of 1000-2000 Hz.)
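        The single-ownership rule the semaphore is supposed to enforce can be sketched with a compare-and-swap. This uses plain C11 atomics rather than the original RTOS API, and the names are illustrative.

```c
#include <stdatomic.h>
#include <assert.h>

static atomic_int uart_owner = 0;   /* 0 = free, otherwise owning task id */

/* Try to take exclusive ownership of the UART; fails if already owned.
   The atomic compare-and-swap guarantees only one task can win. */
int uart_acquire(int task_id)
{
    int expected = 0;
    return atomic_compare_exchange_strong(&uart_owner, &expected, task_id);
}

/* Release only succeeds for the current owner, so a stray release
   from another task cannot free a port it does not hold. */
void uart_release(int task_id)
{
    int expected = task_id;
    atomic_compare_exchange_strong(&uart_owner, &expected, 0);
}
```

        The race described above corresponds to an implementation where this check-and-set is not atomic, so two tasks can both observe the port as free and both claim it.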

        There is also a lot of online information to support what I am saying.


        Finally, app development creates a major problem for the core kernel: if the person developing the app is not very conversant with the way an autopilot works, and codes infinite return statements or holds the semaphore indefinitely, the result is a system hang. (These cases are rare, but they cannot be ignored when we are talking about a flying object.)


        Hence, I then shifted to layering it in hardware, separating the high-priority and low-priority tasks, and wow! All the above problems vanished without much loss in correction speed; in fact, the throughput from the primitive sensors to the motor output increased fourfold on my primary core (core 1).


        Now, coming to the EKF part: yes, the EKF is a very advanced estimation algorithm that gives us real-time values as vectorial orientation relative to the Earth's axes, instead of our traditional XYZ model of the local body. No, I don't implement the entire EKF block, only parts of it. For position hold, the calculated velocities from the IMU, after normalization and compensation, together with X, Y and ECEF X, ECEF Y, are passed through a preferential-gain lead-lag filter (where, again, the autocorrelation of the GPS speed accuracy is used as the gain). I have a nonlinear quadrant calculator that transforms the local-frame velocities into the global frame based on magnetometer orientation. I did a comparison of algorithmic throughput and speed on a Cortex-M4 (the F407Z Discovery board); I was using Eclipse at the time and used the EKF from OpenPilot, as it was directly compatible with my Eclipse setup. At the true clock of 168 MHz, with no delays, just running in int main with dummy sensor values (purely to understand throughput and execution time from a complexity point of view), the EKF executed in 370-450 µs while my code ran in 118-145 µs. Yes, my code will not be as accurate as the EKF, as I calculate pseudo N, E, D directly from the magnetometer heading, while the EKF uses the magnetic flux on each axis plus the speed and heading accuracy; but I can apply faster incremental corrections to achieve an accurate position hold.
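        The variable-gain correction described here can be sketched for one velocity axis. The gain law below (gain shrinking as the reported GPS speed error grows) is an illustrative assumption, not the actual ZUPPA filter, and all names are mine.

```c
#include <assert.h>

/* Correct an IMU-integrated velocity with GPS velocity, trusting the
   GPS less when its reported speed error is larger. */
float blend_velocity(float imu_v, float gps_v, float gps_speed_err)
{
    float k = 1.0f / (1.0f + gps_speed_err);  /* larger error -> smaller k */
    return imu_v + k * (gps_v - imu_v);
}
```

        This captures the trade-off in the text: a cheap per-cycle correction that is individually less accurate than a full EKF update but can be applied far more often.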


        Looking forward to your replies; I can explain more of the research I have done.

        "ZUPPA Autopilot's goal is to take AP technology to the next level, beyond what is available today."


