I know this statement will raise the question: who is this guy telling us that current-day autopilots are all wrong? Well, I have been flying RC planes since the age of 10 and have used autopilots from the early ArduPilot of 2009-2010 vintage to the more recent ones from 3DR, DJI, and FeiyuTech, including their cheap clones, over the last 5-6 years. I have been coding since the age of 15 and am now 21 years old.
 
Based on my experience with this wide range of autopilots, I have come to the conclusion that the hardware of the majority of autopilots is adapted from the world of data-based computing: hardware made for processing huge chunks of predefined data and producing an appropriate notification or display. In data-based computing, inputs come from low-response data sources like Ethernet/the internet or a sensor network; this data is processed, and the outputs are notifications, a display, or in a few cases some very slow controls. Nothing where high-speed control of a dynamic object is involved, even on a single axis.
 
Hence the question: are these processors and this hardware made for controlling a dynamic moving object with freedom across three axes, like a drone?
 
After using all types of available autopilots, I realized that drone control at its core requires the following steps to be repeated as fast as possible:
1. reading sensor values and conveying them to the controller/processor
2. filtering these sensor values
3. feeding the filtered values into a PID loop
4. transferring control commands to the actuators for immediate action.
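The four steps can be sketched as a minimal single-axis loop in C. This is a toy: a simulated plant stands in for real hardware, and the filter constant, PID gains, and function names are illustrative choices only, not taken from any particular autopilot.

```c
#include <math.h>

/* Step 2: complementary filter fusing integrated gyro rate with the accel angle.
   alpha, and all gains below, are illustrative values only. */
double filter_angle(double prev_est, double gyro_dps, double accel_deg, double dt) {
    const double alpha = 0.98;
    return alpha * (prev_est + gyro_dps * dt) + (1.0 - alpha) * accel_deg;
}

/* Step 3: single-axis PID. */
typedef struct { double kp, ki, kd, integ, prev_err; } Pid;

double pid_step(Pid *p, double err, double dt) {
    p->integ += err * dt;
    double deriv = (err - p->prev_err) / dt;
    p->prev_err = err;
    return p->kp * err + p->ki * p->integ + p->kd * deriv;
}

/* The four steps repeated at a fixed cycle rate. A toy one-axis plant stands in
   for the real drone: the motor command is treated as angular acceleration.
   Returns the final attitude error in degrees. */
double run_loop(int iters, double dt) {
    Pid pid = { 4.0, 0.5, 1.2, 0.0, 0.0 };
    double angle = 10.0, rate = 0.0, est = 10.0;    /* start 10 deg off level */
    for (int i = 0; i < iters; ++i) {
        double gyro = rate;                          /* 1. read sensors          */
        double accel = angle;
        est = filter_angle(est, gyro, accel, dt);    /* 2. filter                */
        double cmd = pid_step(&pid, 0.0 - est, dt);  /* 3. PID toward setpoint 0 */
        rate += cmd * dt;                            /* 4. "actuate" the plant   */
        angle += rate * dt;
    }
    return angle;
}
```

Calling `run_loop(15000, 0.002)` simulates 30 seconds at a 500 Hz cycle and settles the initial 10° error to a small fraction of a degree.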

This cycle needs to be repeated over and over again, and the faster the better. This is what determines the stability of the drone: the higher the cycle rate, the higher the stability. So what is needed in the case of drones is a continuous, high-speed, input-output, action-reaction control system. I realized that drone control is not so much about data crunching as about the speed of the control cycle.

If the use of drones is to grow, developers have to be given the freedom to code for their applications without compromising this core control cycle. In a drone, developer code that hangs the system results in catastrophic outcomes: crashes or fly-aways, both of which are regularly reported with current autopilots. Achieving high control-cycle speeds while isolating the flight controls is not possible with the current architecture of sequential processing, so the future of drones is limited by the architecture of currently available autopilots.

So unless a new thought process emerges, drone use cannot grow exponentially. What is needed is a motherboard that is radically different from anything available today.


I have been working on this for a while now, and my first-hand experience is that the moment I shifted my focus to achieving higher-speed control loops with my self-designed autopilot, the level of stability and performance I was able to get was remarkable, even in very high coastal winds on a small 250 mm racer. I achieved this with the most primitive of microcontrollers, the one used in the first ArduPilot: the ATmega328. Things got even better when I replaced the MPU-6050 IMU with the MPU-9250.

With my custom-made Distributed Parallel Control Computing Bus, I have been able to achieve altitude hold with a total drift of less than 1 meter, very accurate heading hold, and GPS navigation on the 250 mm racer. All I did was add another ATmega328 in parallel with the first one to add features.
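A link like this needs some framing so that a corrupted inter-processor message can never reach the flight core. Here is a minimal sketch of such a framed packet in C; the 8-byte layout, start byte, and XOR checksum are hypothetical choices for illustration, not the actual bus format.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical 8-byte frame for the inter-MCU link:
   [start][msg id][5 payload bytes][XOR checksum] */
#define FRAME_LEN  8
#define START_BYTE 0xA5

uint8_t xor_sum(const uint8_t *p, int n) {
    uint8_t x = 0;
    for (int i = 0; i < n; ++i) x ^= p[i];
    return x;
}

/* Core 2 (app/navigation) side: pack a command for the flight core. */
int frame_encode(uint8_t id, const uint8_t payload[5], uint8_t out[FRAME_LEN]) {
    out[0] = START_BYTE;
    out[1] = id;
    memcpy(&out[2], payload, 5);
    out[7] = xor_sum(out, 7);
    return FRAME_LEN;
}

/* Core 1 (flight) side: validate before acting, so a corrupted or half-written
   message from the app core is simply dropped and never steers the drone. */
int frame_decode(const uint8_t in[FRAME_LEN], uint8_t *id, uint8_t payload[5]) {
    if (in[0] != START_BYTE) return -1;       /* not a frame start */
    if (xor_sum(in, 7) != in[7]) return -1;   /* checksum mismatch */
    *id = in[1];
    memcpy(payload, &in[2], 5);
    return 0;
}
```

The flight core polls the link between control cycles and only acts on frames that decode cleanly; anything else is discarded.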

Thanks to this, I am able to completely isolate the core flight-control loop from app-development code, so the drone is never compromised by faulty app code.

Distributed parallel control computing, I have found from my own experience, is an architecture that really has the potential to create exponential growth in drone applications. I would be interested to know of other ways in which people are trying to address these unique control-processing requirements of drones.


Replies

      • Developer
        Indeed. As someone also brought up sensor aliasing effects: if you consider the PX4 flight stack, sensor polling is at 2 kHz to mitigate the effects of coning, etc. Running control loops at that rate is just a waste of processing, though.

        The only advantage of higher-Hz loops would be reduced latency, and any modern flight stack has that part well sorted. I've seen quite a few racer pilots say that a faster control loop works better (because the Internet said so), but this is not true. Even OneShot ESCs don't really help because they run faster; it is just that, thanks to the lower pulse widths, they reduce *control latency*.
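        A back-of-envelope sketch of that latency point, assuming classic 1000-2000 µs PWM refreshed at 490 Hz versus OneShot125's 125-250 µs pulses fired once per 2 kHz control loop (the figures are indicative only):

```c
/* Worst case from "command computed" to "ESC has sampled it": wait up to one
   full frame period for the next refresh, then the full pulse must complete. */
double worst_case_us(double frame_hz, double max_pulse_us) {
    return 1.0e6 / frame_hz + max_pulse_us;
}
```

        With these assumptions, classic PWM comes out at roughly 4.0 ms worst case (about 2041 µs frame wait plus a 2000 µs pulse), versus about 0.75 ms for OneShot125. The win comes from latency, not from loop rate.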

        The point I'm trying to make is that Hz does not give you much, but reducing latency does, especially with an optimal software architecture combined with hard-timing hardware. Unfortunately, three AVRs don't get you there at all. The latency in the inter-processor link alone can kill performance.

        I agree with Philippe, especially as he points out that a quadrotor can only rotate as fast as its eigenfrequency allows. With physics and dynamics kicking in, your propellers can only respond at around 120 Hz even with the world's best ESCs. The limits of the system dynamics are something you cannot simply ignore by declaring that faster loops equal better control.

        As Phil said, a better system-modelling method, and integrating the dynamics model of your system into the controllers, makes much more sense. Model predictive control, or even LQR/LQG (assuming the system is well modelled), will give you tangible performance improvements. Again, not something you can do with three AVRs.

        Even estimator improvements will give you better results than faster loops alone. I've been working on integrating a system-dynamics model into a vision-based multi-sensor estimator for better dead reckoning and sensor-offset estimation, and the results are quite satisfying.
  • What we need is a multi-core, multi-threaded CPU, like the current heap of cheap ARM quad- and octo-cores. Bind the flight-control kernel loop to one CPU core, and then leave the rest for all the other processing.

    But it could also be done by creating timer-interrupt-based time slices, prioritizing the core thread.
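    That time-slicing idea can be sketched as a host-side toy in C; real code would hang this off a hardware timer interrupt, and the tick counts and budget here are arbitrary.

```c
/* Host-side toy of timer-interrupt time slicing: every "tick" is one interrupt;
   the flight-control slice always runs on schedule, and app code only gets the
   leftover slices, so app overruns can starve the app but never the flight loop. */
typedef struct { int flight_runs; int app_runs; } Counters;

void run_ticks(Counters *c, int ticks, int flight_every, int app_budget_per_period) {
    int budget = 0;
    for (int t = 0; t < ticks; ++t) {
        if (t % flight_every == 0) {           /* highest priority, never skipped */
            c->flight_runs++;
            budget = app_budget_per_period;    /* refill the app's time slice     */
        } else if (budget > 0) {               /* app work only in spare slices   */
            c->app_runs++;
            budget--;
        }                                      /* else: idle tick                 */
    }
}
```

    Over 1000 ticks with a flight slice every 4th tick and an app budget of 2 per period, the flight task runs exactly 250 times and the app task 500 times, with the remainder idle.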

    This is the way things are going.

  • Developer

    There is a guy here using three ATmega328s for his FC:

    https://www.youtube.com/watch?v=9KYkznAas7Q

    regards

    Andy

    • Thanks Andy, that proves the concept works well.

      Venkat

  • @Joseph Owens

    Bang on, you have understood the concept perfectly.

    Thanks to you, and for the benefit of the other forum members, the concept is as below:

    “The proposed architecture of parallel processing mimics the body.

    We all have two ways of responding to external information :

    a)      Reflex response to sensory information: fast, instantaneous reactions to sensory input; not much thought or processing goes into these actions.

    b)      Thought-out response to sensory information: we assimilate the information, understand it, and respond after processing it.

     

    Core 1 (KCU) in my architecture works like (a) above as far as responses to sensory information are concerned: it deals with the effect of ambient conditions on the drone along three axes, like a reflex, with high-speed responses.

     

    Core 2 (IHOU) is like (b): it handles the higher-level processing like navigation, app expansion, etc. This processor need not necessarily be an ATmega328; it could be any processor, like an M3 or M4, depending on the requirements of the application.

     

    As things progress, yes, the architecture will get more application-specific, as it does in nature, through the evolution of the drone space.

    Joseph thanks once again:)

  • I have a lot of trust in your forward-thinking idea. Keep me updated!

    • Sure will, please ping me at venkat@muav.in

      Venkat

  • This makes a lot of sense from a couple of viewpoints. On a practical design level, multicore CPUs of even lower spec should perform better on short independent loops than a higher-spec processor (assuming proper threading of the software). There are several paradigms in biology, the most obvious being reflex nerve paths for a lot of low-level 'processing'. Carrying this model forward, it would make sense to have cheap multicore processors handle well-threaded software for control loops, and perhaps a slightly heftier 'brain' processor that handles route planning, obstacle avoidance, etc. This seems to be a natural extension of the dual trends toward more purpose-specific architecture and natural or accidental mimicry of biological systems. Three billion years of system optimisation for survival is hard to argue with.
  • I'd like to ask a general question: if you're looking for a high-speed control loop that would basically do some sensor filtering, PID regulation, and servo-signal generation, wouldn't a low-cost DSP be a good choice in place of the ATmega328? The point is that they are optimized at the hardware level to execute the operations needed for digital filtering extremely fast. Maybe the ATmega is sufficient for this task, but why not use a specialized DSP?

    • @Ben

      • One reason is that when an ATmega can do the job, why use a DSP? Any additional functions can be added as applications via the interface anyway.
      • Also, since low-cost DSPs only have interface channels like SPI, I2C, USB, and CAN (like the Intel Edison architecture), I can't use one as Core 2. Along with interfacing, I need to control a gimbal via PWM, decode PWM RX data, and decode analog ultrasonic data, which are core MCU functions. So I would have to put a slave MCU between the DSP and the peripherals, which reduces efficiency and throughput, and that is bad for control computing. For data computing, you can use a DSP like a GPU: an MCU is the main core, we create instruction sets on the DSP according to what we want to calculate, send values to it over a bus like USB, and get the computed data back. In this architecture, the throughput and hardware flexibility of the MCU are not hindered.
