I know this statement will raise the question: who is this guy coming and telling us that current-day autopilots are all wrong? Well, I have been flying RC planes since the age of 10 and have used autopilots over the last 5-6 years, from the early ArduPilot of 2009-2010 vintage to more recent ones from 3DR, DJI and FeiyuTech, including their cheap clones. I have been coding since the age of 15 and am now 21 years old.
Based on my experience with this wide range of autopilots, I have come to the conclusion that the hardware of the majority of autopilots is adapted from the world of data-based computing, made for processing huge chunks of predefined data and producing an appropriate notification or display. In data-based computing, inputs come from low-response data sources like Ethernet/internet or a sensor network; this data is processed, and the outputs are either notifications or a display, and in a few cases some very slow controls. Nothing where high-speed control of a dynamic object is involved, even on a single axis.
Hence the question: are these processors and this hardware made for controlling a dynamic moving object with freedom across 3 axes, like a drone?
After using all types of available autopilots, I realized that drone control at its core requires the following steps to be done repeatedly, as fast as possible:
1. reading  sensor values and conveying them to the controller/processor
2. filtering  these  sensor values
3. pushing the filtered values  into a PID loop
4. transferring control commands to the actuators for immediate action.
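The four-step cycle above can be sketched as a single loop. This is a minimal Python illustration only (the actual firmware described later runs on an ATmega); the sensor/actuator functions are placeholders and all gains are illustrative:

```python
def read_sensor():
    # placeholder: on real hardware this reads the IMU over I2C/SPI
    return 0.0

def low_pass(prev, raw, alpha=0.2):
    # step 2: simple exponential filter on the raw reading
    return prev + alpha * (raw - prev)

def pid(error, state, kp=1.0, ki=0.0, kd=0.0, dt=0.0025):
    # step 3: textbook PID with persistent integrator and last error
    state["i"] += error * dt
    d = (error - state["e"]) / dt
    state["e"] = error
    return kp * error + ki * state["i"] + kd * d

def write_actuator(cmd):
    # placeholder: on real hardware this updates the ESC/servo PWM
    pass

def control_cycle(setpoint, filtered, state):
    raw = read_sensor()                    # 1. read sensor values
    filtered = low_pass(filtered, raw)     # 2. filter them
    cmd = pid(setpoint - filtered, state)  # 3. push into the PID loop
    write_actuator(cmd)                    # 4. command the actuators
    return filtered, cmd
```

On real hardware this body runs inside a fixed-rate timer interrupt or tight main loop; `dt` must match the actual loop period.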

This cycle needs to be repeated over and over again, and the faster the better. This is what determines the stability of the drone: the higher the cycle rate, the higher the stability. So what is needed in the case of drones is a continuous, high-speed input-output, action-reaction control system. I realized that drone control is not so much about data crunching as about the speed of the control cycle.

If the use of drones is to grow, developers have to be given the freedom to code for their applications without compromising this core control cycle. In the case of drones, developer code that causes a system hang will have catastrophic outcomes: either crashes or flyaways, both of which have been regularly reported with current autopilots. Achieving high control-cycle speeds while isolating the flight controls is not possible with the current architecture of sequential processing; hence the future of drones is limited by the architecture of currently available autopilots.

So unless a new thought process emerges, drone use cannot grow exponentially. What is needed is a motherboard that is radically different from anything available today.

I have been working on this for a while now, and my first-hand experience is that the moment I shifted my focus to achieving higher-speed control loops with my self-designed autopilot, the level of stability and performance I was able to get was awesome, even in very high coastal wind speeds on a small 250 mm racer. I achieved this with the most primitive of microcontrollers, the ATmega328 used in the first ArduPilot. Things got even better when I replaced the MPU 6050 IMU with the MPU 9250.

With my custom-made Distributed Parallel Control Computing Bus I have been able to achieve altitude hold with a total drift of less than 1 meter, a very accurate heading hold, and GPS navigation on the 250 mm racer. All I did was add another ATmega328 in parallel to the first one to add on features.
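The post does not spell out the wire protocol of the bus, so purely as an assumption, here is one common way two MCUs share a serial link: fixed-header frames with a length byte and an additive checksum, so a corrupted message from the feature/app chip can be dropped without ever disturbing the flight-control chip. All names and values here are illustrative:

```python
HEADER = 0xA5  # illustrative start-of-frame marker

def checksum(payload):
    # 8-bit additive checksum over the payload bytes
    return sum(payload) & 0xFF

def encode_packet(msg_id, payload):
    # frame layout: [header, msg_id, length, payload..., checksum]
    return bytes([HEADER, msg_id, len(payload)]) + bytes(payload) \
        + bytes([checksum(payload)])

def decode_packet(frame):
    # returns (msg_id, payload) or None if the frame is corrupt
    if len(frame) < 4 or frame[0] != HEADER:
        return None
    msg_id, length = frame[1], frame[2]
    if len(frame) != 4 + length:
        return None
    payload = frame[3:3 + length]
    if frame[3 + length] != checksum(payload):
        return None
    return msg_id, bytes(payload)
```

The flight-control MCU would call `decode_packet` on each received frame and simply ignore anything that fails the check, keeping its own loop timing untouched.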

Thanks to this, I am able to completely isolate the core flight-control loop from app-development code, so the drone is never compromised by faulty app code.

Distributed parallel control computing, I have found from my own experience, is an architecture that really has the potential to create exponential growth in drone applications. I would be interested to know of any other ways by which others are trying to address this unique control-processing requirement of drones.


      • Thanks! Luci is quite a bit more compact than those other solutions, not to mention it has WiFi and Bluetooth built in, which is much more convenient than Ethernet.

        And yeah, it's not exactly the same, but many of the things he listed are things that a Linux-level processor could do versus another set of dedicated MCUs. Also, those kinds of MCUs are now being used in mobile phones similarly to how it was proposed; in fact, the Edison has a dedicated MCU component just as a "sensor coprocessor".

    • More info please: does it support more than the Intel Edison?

      • The Edison is built into the board for compactness. However, our software (Dronesmith Link, Dronesmith Suite) runs on any Linux-capable drone platform (e.g. Solo, Phantom) or companion computer with a little bit of hardware config. We are also developing more accessory/daughter boards that can bring on more powerful co-pilots, which can focus on certain heavy-calculation processes like sense-and-avoid and object recognition. If you want more info, you can hop on our community Slack channel.

        Join Dronesmith Community on Slack!
  • Long live the Kalman filter as a "very advanced SLAM algorithm", and advanced architectures where "even if the esc's output Control doesn't change every cycle, the motor Command changes based on PID and its previous OP". Also UARTs implementing the new Distributed Parallel Control Computing Bus.

    And the Turing test.

  • Good luck with your project Venkat :)

  • I have mentioned my self-built Java analytical tool, called Dexter, in my post.
    Here are some screenshots; it's similar to NI SignalExpress.
    Sorry for the haphazard way of posting; I am unable to post from the laptop, hence posting from mobile.


  • I think I have discussed this topic a lot now, so to those who believe in my concept: thanks, and you can keep in touch with me on my email venkat@muav.in.
    To all the others: great discussing with you guys; I learnt a lot from you as well.
    Cheers and bye. I am going back into my product mode of development and production, so I won't be back here for a while; lots of work to do. You will hear about my product soon. I look forward to our roads crossing again.

  • Why do I say faster loops are better?
    It's actually simple. If you refer to my earlier posts, I have adhered to the Nyquist sampling theorem (I have called it Natural Sampling). I have done a number of tests to establish that the highest response rate of a drone is that of a racer's, 35-40 Hz; in fact, I built a Java-based real-time data-analysis software precisely to do this.
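As a sanity check on those numbers: with the airframe's highest response around 35-40 Hz, the Nyquist criterion gives the minimum sampling rate directly. The "practical margin" below is my own illustrative oversampling factor, not a figure from the post:

```python
def nyquist_min_rate_hz(f_max_hz):
    # strict Nyquist minimum: sample at more than twice the highest
    # frequency you need to capture
    return 2.0 * f_max_hz

def practical_rate_hz(f_max_hz, margin=5.0):
    # control loops usually run well above the strict minimum;
    # 'margin' is an illustrative oversampling factor (assumption)
    return margin * nyquist_min_rate_hz(f_max_hz)
```

So a 40 Hz airframe needs sampling above 80 Hz at an absolute minimum, with real loops typically running several times faster.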

    Please note, I did not say prediction is bad; in fact, without prediction you cannot get stable flight. Again, if I used raw values, without filtering and without prediction, I would not get a stable flight. What I am saying is, let's look at this a little differently. If I use an incremental corrector, say a smoother, how do I get it to respond quickly without killing the response rate? I see two simple ways:
    1) couple your raw output with a high-refresh-rate sensor, something like IMU Z velocity + baro rate; there are many linear and non-linear models to do this (I am not going to go deep into that).
    2) use the raw value after incrementally smoothing it at a high update rate, then couple it with a high-response sensor.
    By this we create a pseudo-digital LPF.
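A minimal sketch of such a pseudo-digital LPF, assuming the baro-rate + IMU-Z-velocity pairing mentioned above; the class name and both gains are illustrative, not taken from the actual firmware:

```python
class PseudoDigitalLPF:
    """Incrementally smooth the slow/noisy baro-derived climb rate,
    then blend it with the high-refresh IMU vertical velocity."""

    def __init__(self, alpha=0.1, blend=0.9):
        self.smoothed = 0.0
        self.alpha = alpha   # incremental smoothing gain for the baro rate
        self.blend = blend   # weight on the fast IMU term

    def update(self, baro_rate, imu_z_vel):
        # 1) incrementally smooth the slow sensor at the loop rate
        self.smoothed += self.alpha * (baro_rate - self.smoothed)
        # 2) couple with the high-response sensor for a fast estimate
        return self.blend * imu_z_vel + (1.0 - self.blend) * self.smoothed
```

The fast term keeps the response rate up between baro updates, while the smoothed slow term anchors the estimate against IMU drift.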
    Now I can update the non-linear prediction model each time, and each time, based on time and an exponential gain, it will give me an output. This works because of prediction based on the covariance of the two or more sensors: based on linear time we can adjust the sensor trajectory using a sensor gain, where in linear models the gain is a fixed constant, while in non-linear models it is an exponential equation which again is incrementally adjusted according to either auto-correlation, covariance or both.
    Simply put, values are filtered incrementally, the prediction update in all models is incremental, and the filter gain is adjusted incrementally. So why should the correction not be incremental? Obviously, if the sensor input is incremental, so should the correction be. But this is where the real world breaks the virtual world inside our heads, which says any value should be infinitely linear or non-linear. If a dynamic Gaussian external force acts upon the FC (basically an unpredictable force like wind, gusts, drafts, etc.), the response becomes broken and discontinuous: we get finitely linear and finitely non-linear values. When these are fed into a standard PID, the same thing is reflected on the drone. It starts correcting based on, say, a linear sensor curve; then all of a sudden some force makes the sensor curve non-linear, at which point the differential (Kd) term spikes, causing a shake (one of the reasons large shakes are present by default in the majority of autopilots and have to be tuned out when subject to even a small amount of external force like wind).
    How can we correct this? By smoothing our motor output value just after processing the PID. But this kills the response rate, so we need to increase the speed at which we do it in order to maintain the response rate. So even if the ESC's output doesn't change every cycle, the motor command changes based on the PID and its previous output. My system doesn't filter only the input values, like many conventional systems, but filters the outputs based on previous inputs too; and to preserve the response rate we need to speed up the timing loop, very similar to a closed-loop predictive feedback filter.
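The output-smoothing idea (blending each new PID result with the previous motor command) might look like the sketch below; the blend gain is illustrative, and in the scheme described above the loop rate would be raised so this extra filtering does not cost response:

```python
def smooth_motor_output(prev_cmd, pid_out, beta=0.5):
    # blend the fresh PID output with the previous motor command:
    # even if the ESC value barely changes in one cycle, the command
    # always evolves from the PID result and its previous output
    return prev_cmd + beta * (pid_out - prev_cmd)
```

With `beta=0.5`, a PID step from 1000 to 1200 moves the command to 1100 in one cycle and converges over the next few cycles, rounding off the Kd spikes described above.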

    @Kabir: it is better to obtain slower and more accurate sensor values than high-speed and noisy ones, because otherwise you need to take out aliasing and raster noise with a filter, which ultimately reduces the response of the sensor that we wanted to increase in the first place by higher sampling. To maintain the same response rate we then go for higher and higher Hz, as you say, so as to compute the filter faster; but ultimately the difference in stability that I have seen is about 5-10% at most (and that is visible only on a small-size drone, 250-360 mm).
    • Secondly, in my perspective, trying to correct/prevent dead reckoning using external vision-based sensors in real time is like building a $1 billion pen to write in space rather than using a $2 pencil :). There are ways to reduce dead reckoning of the GPS using primitive values like IMU velocities and the magnetometer.
      Here's a video of my system in use from when I was developing the GPS filtering and navigation. Please note I deliberately used a rover, as deviations on the ground can be seen accurately and correlated accurately with real-time data: https://youtu.be/8ZBWFTHnKcg
      You can see the GPS accuracy is dead on, as it hits the same shrub (which is about 1.5 m across) on 2 separate navigation runs; when I give return-to-home you can see it comes and stops 1 foot away from where it started. It has an overall accuracy of about 1.5 m averaged over 20 seconds and 2 m over 90 seconds at an HDOP of <=3, with an extremely low noise coefficient.
      What I am trying to say is that the core 3-dimensional stability of the drone should be obtained from its primitive sensors: IMU, baro and GPS. The algorithms, fusers and response rate should be perfectly adjusted to get an overall drift < 2 m averaged over a finite period of time. Once this is done, you can add vision-based sensors to enhance guidance, positioning and navigation. Basically, DJI does this: they use the primitive sensors to obtain a very stable and accurate 3D model with very low drift, then use sensors like optical flow and vision sensors to reduce or even eliminate the drift incrementally and slowly, not in real time. The peripheral sensors are not used to directly correct drift in real time; they are used to reduce inaccuracies over a prolonged period.
      In my opinion using vision to do a real time correction is expensive and highly inefficient.
      As far as the architecture is concerned, let us look at it this way: when you first start riding a cycle, your entire focus is on achieving and sustaining balance. Once this is done, you start thinking about navigating and using the cycle for other purposes; the balancing part becomes a reflex action which does not need thinking.
      Simply put, the processes the nervous system adopts are reflexes and thought-out decision making. The reflex part works in real time to give swift corrections, while the decision-making part takes its time using the stereoscopic image from the eyes, evaluating and predicting outcomes, then giving a correction. Similarly, even if you close your eyes while riding, you don't lose balance but start slowly drifting from the original path; this is where the eyes come in: they reduce this drift over time but are not used to correct the balance itself.
      This is exactly how my architecture works, as discussed earlier in Joseph Owens' post and my reply to it on page 4.
      I uploaded a sample program earlier in the thread which can be run on any Raspberry Pi with a JVM to interface with the Zuppa autopilot. You need not buy dedicated expansion hardware from us to expand it for your applications; you could buy a Raspberry Pi, build your app programs on it, and interface it to the Zuppa AP board through a UART library. Any programming issues you may induce will not affect the drone's flight; they will only be isolated to your app not working.
      For the benefit of those who missed out on the app program, enclosing it once again :).
      @Micha: now that you have talked about the economy, let me let you guys in on a small secret: my first lot of full-featured Zuppa Dronepilot motherboards, suitable for both copters and fixed wing, shall be available next week and will be around 30-40% cheaper than any AP available today (barring the cheap Chinese clones) on a single-piece basis, with a significant possibility of further reduction for quantity buyers. Will upload the datasheet in a couple of days.
      The latency is about 100 us (for one byte of information) from byte form in core 2 to byte form in core 1. If you take from receiver inpu
      • FYI the graphite in the $2 pencil poses a danger of shorting out electronics.  Everyone uses the space pen.
