I know this statement will raise the question: who is this guy coming and telling us that current-day autopilots are all wrong? Well, I have been flying RC planes since the age of 10 and have used autopilots from the early ArduPilot of 2009-2010 vintage to the more recent ones from 3DR, DJI, and FeiyuTech, including their cheap clones, over the last 5-6 years. I have been coding since the age of 15 and am now 21 years old.
 
Based on my experience with this wide range of autopilots, I have come to the conclusion that the hardware of the majority of autopilots is adapted from the world of data-based computing, made for processing huge chunks of predefined data and giving an appropriate notification or display. In data-based computing, inputs come from low-response data sources like Ethernet/internet or a sensor network; this data is processed, and the outputs are notifications, a display, or in a few cases some very slow-speed controls. Nothing where high-speed control of a dynamic object is involved, even on a single axis.
 
Hence the question: are these processors and this hardware made for controlling a dynamic moving object with freedom across three axes, like a drone?
 
After using all types of available autopilots, I realized that the fundamentals of drone control at its core require the following steps to be done repeatedly, as fast as possible:
1. reading sensor values and conveying them to the controller/processor
2. filtering these sensor values
3. pushing the filtered values into a PID loop
4. transferring control commands to the actuators for immediate action.

This cycle needs to be repeated over and over again, the faster the better. This is what determines the stability of the drone: the shorter the cycle time (i.e., the higher the loop rate), the higher the stability. So what is needed in the case of drones is a continuous, high-speed, input-output, action-reaction control system. I realized that drone control is not so much about data crunching as about the speed of the control cycle.
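As a minimal sketch of such a cycle for one axis (Arduino-style C++ on an ATmega328; the driver stubs, loop rate, and gains below are illustrative placeholders, not my actual implementation):

```cpp
// Minimal fixed-rate control loop: read -> filter -> PID -> actuate.
// readIMU() and writeMotors() are stubs standing in for real driver code.
const uint32_t LOOP_PERIOD_US = 2000;  // 500 Hz control cycle (placeholder)
const float KP = 1.5f, KI = 0.02f, KD = 0.05f;  // untuned placeholder gains

uint32_t lastLoop = 0;
float angle = 0, integral = 0, prevError = 0, setpoint = 0;

void readIMU(float &gyroRate, float &accelAngle) { gyroRate = 0; accelAngle = 0; }
void writeMotors(float output)                   { (void)output; }

void setup() {}

void loop() {
  uint32_t now = micros();
  if (now - lastLoop < LOOP_PERIOD_US) return;   // 1. hold a fixed loop rate
  float dt = (now - lastLoop) * 1e-6f;
  lastLoop = now;

  float gyroRate, accelAngle;
  readIMU(gyroRate, accelAngle);                 // 2. read raw sensor values

  angle = 0.98f * (angle + gyroRate * dt)        // 3. complementary filter:
        + 0.02f * accelAngle;                    //    gyro short-term, accel long-term

  float error = setpoint - angle;                // 4. PID on the filtered value
  integral  += error * dt;
  float derivative = (error - prevError) / dt;
  prevError  = error;

  writeMotors(KP * error + KI * integral + KD * derivative);  // 5. actuate
}
```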

If the use of drones is to grow, developers have to be given the freedom to code for their applications without compromising this core control cycle. In the case of drones, a developer's code causing a system hang will have catastrophic outcomes: crashes or flyaways, both of which have been regularly reported with current autopilots. Achieving high control-cycle speeds while isolating the flight controls is not possible with the current architecture of sequential processing; hence the future of drones is limited by the architecture of currently available autopilots.

So unless a new thought process emerges, drone use cannot grow exponentially. What is needed is a motherboard that is radically different from anything available today.


I have been working on this for a while now, and my first-hand experience is that the moment I shifted my focus to achieving higher-speed control loops with my self-designed autopilot, the level of stability and performance I have been able to get is awesome, even in very high coastal wind speeds on a small 250 mm racer. I achieved this with the most primitive of microcontrollers, the one used in the first ArduPilot: the ATmega328. Things got even better when I replaced the MPU-6050 IMU with the MPU-9250.

With my custom-made Distributed Parallel Control Computing Bus I have been able to achieve altitude hold with a total drift accuracy of less than 1 meter, a very accurate heading hold, and GPS navigation on the 250 mm racer. All I did was add another ATmega328 in parallel to the first one to add on features.

Thanks to this, I am able to completely isolate the core flight control loop from app development code, so the drone is never compromised by faulty app code.

Distributed parallel control computing, I have found from my own experience, is an architecture that really has the potential to create exponential growth in drone applications. I would be interested to know of any other ways in which others are trying to address this core, unique control-processing requirement of drones.


Replies

    • @turdsurfer

      Yes, all of the above. I have experienced it in real time; in fact, while testing I have had crashes (due to improper algos) but never flyaways, even with navigation turned on.

      Additionally, I have created an additional module for data logging, something like a black-box recorder with an update rate of 50 Hz (for real-time analysis). http://www.muav.in/motherboard_zuppa_klik.php (the add-on module supports SDK-based camera trigger).

      Finally, just to add to the point you have made, it is also very easy to build apps on this type of system architecture directly via an interface (refer to my reply to Lazer Developer; it has a Java app for RP2).

      • FYI, I will be releasing all the app libs open source after I manage to get in a few more features.

  • There are still limitations of the hardware that the FC is sending commands to, e.g. current PWM-based ESCs need to be replaced. 

    Then there is the frame design, motors, props... many variables. 

    I've had this Y6 for about 1 1/2 years. I took a video of it the other day because someone on another forum said Arducopter isn't stable in high winds. With this copter I don't worry about 30 mph winds. How stable does it really need to be?

    • Oh yes, Y6s for some reason seem to be very stable in high winds... I remember the day I maidened my first Y6 (7 kg); the wind started building up right when I got to the field, so strong that I was seriously debating whether I should call it off... The moment it took off, I immediately understood that it was the most stable platform I've ever flown. (The flight controller is an AUAV-X2.)

      And it's still my workhorse.

      Later in the evening this picture was taken: https://goo.gl/photos/uhfE9hc6ZqfiL12z6

  • Sounds like you've done a lot of work! I think a few things need to be clarified though, so here are my 2 cents:

    • Faster update rates are better when using discrete sampling, since the underlying continuous system can be better approximated; this is particularly true for highly dynamic vehicles. However, you tend to see diminishing returns.
    • There is a maximum speed at which you can send commands to the actuators, and the speed with which the actuators can make a meaningful input depends on their dynamics (rotor inertia, servo speed, etc.).
    • In my opinion, separating flight-critical tasks from non-flight-critical ones on separate hardware is a good way to continue development while maintaining reliability. This is why companion computers are gaining traction.
    • Although parallel distributed architectures can have merit, in my opinion, splitting the flight-critical tasks themselves across separate hardware decreases reliability, due to the increased complexity and the multiple processors, any of which could cause a system failure.
    • Assuming the hardware can run the estimation and control algorithms sufficiently fast and the vehicle is well designed, I'd say the main contributors to performance are the algorithms themselves and the quality of the sensors. This is not only the algorithm architecture, but also their (correct) implementation. Sometimes an algorithm may seem like it's working, but the performance may be inhibited by a small bug in the implementation.
    • An EKF is not a type of SLAM algorithm; however, an EKF can be used within SLAM as the data-fusion mechanism (as can many other algorithms). An EKF is an extension of the Kalman filter, which is used to estimate quantities by fusing sensor data. Many other variations exist, including unscented Kalman filters, sigma-point filters, etc. In the context of UAVs, the EKF normally estimates position, velocity, attitude, etc.; however, it could estimate much more, such as thrust, wind, vehicle drag coefficients, etc.
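
    To make the filtering idea concrete, here is a minimal 1-D Kalman filter fusing a noisy baro altitude with accelerometer-driven prediction. It is the linear special case of the EKF mentioned above, and the noise values are illustrative placeholders:

```cpp
// 1-D Kalman filter: state is [altitude, climb rate]; accel drives the
// prediction, baro altitude provides the correction.
#include <cstdio>

struct KF1D {
  float h = 0, v = 0;                 // state: altitude (m), climb rate (m/s)
  float P[2][2] = {{1, 0}, {0, 1}};   // state covariance
  const float qAcc  = 0.5f;           // accel (process) noise, (m/s^2)^2
  const float rBaro = 1.0f;           // baro (measurement) noise, m^2

  void predict(float acc, float dt) { // propagate state with measured accel
    h += v * dt + 0.5f * acc * dt * dt;
    v += acc * dt;
    // P = F P F' + G G' qAcc, with F = [[1,dt],[0,1]], G = [dt*dt/2, dt]
    float p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1];
    float p01 = P[0][1] + dt * P[1][1];
    float p10 = P[1][0] + dt * P[1][1];
    P[0][0] = p00 + 0.25f * dt * dt * dt * dt * qAcc;
    P[0][1] = p01 + 0.5f  * dt * dt * dt * qAcc;
    P[1][0] = p10 + 0.5f  * dt * dt * dt * qAcc;
    P[1][1] = P[1][1] + dt * dt * qAcc;
  }

  void update(float baroAlt) {        // correct with measurement z = h + noise
    float s  = P[0][0] + rBaro;       // innovation covariance
    float k0 = P[0][0] / s, k1 = P[1][0] / s;   // Kalman gain
    float innov = baroAlt - h;
    h += k0 * innov;
    v += k1 * innov;
    P[1][0] -= k1 * P[0][0]; P[1][1] -= k1 * P[0][1];  // P = (I - K H) P,
    P[0][1] -= k0 * P[0][1]; P[0][0] -= k0 * P[0][0];  // row 1 first (old row 0)
  }
};

int main() {
  KF1D kf;
  for (int i = 0; i < 500; ++i) {
    kf.predict(/*acc=*/0.0f, /*dt=*/0.002f);        // 500 Hz IMU prediction
    if (i % 20 == 0) kf.update(/*baroAlt=*/10.0f);  // 25 Hz baro correction
  }
  std::printf("alt=%.2f m, climb=%.2f m/s\n", kf.h, kf.v);
}
```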

    • @Dan Wilson:

      ”Separating the flight critical tasks from non flight critical”: yes, that is the purpose of my architecture, but I would coin it as ”isolating the flight-critical tasks from the non-flight-critical ones”. What I provide for access to the KCU (Core 1) is a bus interface similar to USB, so the IHOU (Interface and Higher Operations Unit, Core 2) only gives control via the interface, something similar to a mouse giving commands to a PC's OS to move the pointer.

      Core 2's access to Core 1 functions is limited to a specific instruction set (like a mouse's driver software), so even erroneous code in Core 2 will not reflect on Core 1 in terms of time dependencies or instruction execution: the access instruction set is predefined and limited, and we know exactly how much time each access should take. So even if the bus gets hung on an access (very unlikely), Core 1 just withdraws after the allocated time for the access is over (one of the reasons a mouse still works even when the PC's OS hangs). Hence the reliability of the system is very high without needing extra redundant sets of sensors, GPSs, or processors.
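
      A minimal sketch of that withdraw-on-timeout idea on the Core 1 side (Arduino-style C++; the command codes, the 200 µs budget, and the helper functions are illustrative placeholders, not my actual bus protocol):

```cpp
// Core 1 (KCU) side: service bus requests, but never let a hung or chatty
// Core 2 steal time from the flight loop.
const uint32_t BUS_BUDGET_US = 200;        // hard per-cycle time budget

void sendTelemetry()     { /* write a read-only state snapshot to the bus */ }
void setWaypointOffset() { /* validate and apply a bounded request */ }

void serviceBus() {
  uint32_t start = micros();
  while (Serial.available()) {
    if (micros() - start > BUS_BUDGET_US)  // budget spent: withdraw, so the
      return;                              // flight loop always runs on time
    switch (Serial.read()) {               // predefined, limited instruction set
      case 0x01: sendTelemetry();     break;
      case 0x02: setWaypointOffset(); break;
      default:                        break;  // unknown commands are ignored
    }
  }
}

void setup() { Serial.begin(115200); }     // bus UART (placeholder rate)
void loop()  { /* flight control cycle runs here */ serviceBus(); }
```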

       

      Yes, very early on in my development I realized that the algorithms need to work perfectly, taking into account all possible scenarios. For example, when I was designing and testing my special semi-cardioid-locus-based cross-track correction algorithm, which corrects cross-track deviation from the reference line using ONLY TARGET HEADING, I did not take into account the effect of wind on the algorithm: if (referencePosition.angleToTarget - currentPosition.angleToTarget) > 50, the TARGET HEADING would have a 70-deg deviation from currentPosition.angleToTarget and the craft would start oscillating about the reference line. I then had to add a smoother which gives controls based on the rate of change of cross-track error if (referencePosition.angleToTarget - currentPosition.angleToTarget) > 30. I also developed a control experiment to test whether it works correctly with real-time data, along with a Java-based software simulator (with this algorithm no additional tuning is required for navigation, only a decent heading-hold tuning).

      Additionally, all my new navigation algorithms are tried out on an autonomous car before being implemented on an airplane or multirotor, as any small deviation is identified far more easily on a car running a tight navigation track than on a flying object. So yes, a lot of control experiments and testing are done to make sure all the algorithms and sensor fusers work consistently in all scenarios, including inside a fridge at -5 °C (India is not a cold country, it is DAMN HOT here ;-) ) as well as at 60 °C.
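
      In rough C++, the smoother branch described above might look like this (only the 30-deg threshold comes from my description; the structs, the kRate gain, and the wrap180() helper are assumptions, and the actual semi-cardioid locus math is not reproduced):

```cpp
// Sketch of a rate-based smoother wrapped around a heading-only
// cross-track correction. Call targetHeading() from the nav loop.
#include <cmath>

struct Position { float angleToTarget; };   // bearing to the target, degrees

float wrap180(float a) {                    // wrap an angle into [-180, 180)
  while (a >= 180.0f) a -= 360.0f;
  while (a < -180.0f) a += 360.0f;
  return a;
}

float targetHeading(const Position &referencePosition,
                    const Position &currentPosition,
                    float crossTrackRate) {  // d(cross-track error)/dt, m/s
  float diff = std::fabs(wrap180(referencePosition.angleToTarget -
                                 currentPosition.angleToTarget));
  if (diff > 30.0f) {
    // Geometry too distorted (e.g. by wind): a heading-only correction would
    // oscillate about the line, so steer on the rate of change of the error.
    const float kRate = 2.0f;               // placeholder gain, deg per (m/s)
    return currentPosition.angleToTarget - kRate * crossTrackRate;
  }
  // Small deviation: the semi-cardioid locus target heading would be
  // computed here; this sketch just flies the bearing to the target.
  return currentPosition.angleToTarget;
}
```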

       

      Why did I say ”EKF type of SLAM”? Because as of now we use the EKF only for fusing our local sensors (ones that consider the drone as a free body in free space, like the IMU, baro, and GPS) to get a 3-dimensional idea of where we are in free space relative to our reference plane. But if we additionally add environmental sensors which can map and create environmental variables, like the positions of objects in space and the dimensions of and distances from objects in our local space, these can be fused with the 3D position in free space to give very accurate local-space and global-space positions, actually merging both into a single positional entity. This is basically spatial localization (our 3D free-space position) and mapping (position with respect to objects in local space). One of the means to do that is the EKF, but we are also using (free-space model + Madgwick algo) and (Dijkstra's node model + Madgwick algo) to compute this; theoretically speaking, though, the EKF will do it more accurately, as it also estimates dynamic parameters like the x velocity vector, y velocity vector, heading rate vector, etc. As mentioned earlier, all my fusers for (IMU speed + GPS speed) and (IMU Z vector + baro Z vector) are based on the EKF, but are implemented not as blocks but as parts.

  • It's amazing how bright 21-year-olds are, especially since that's only six years since you were 15. Just imagine how great you're going to be when you're 27. Do that 5 more times and you'll be 57. Then you will look back and see how smart you really were.

    Something else to think about:
    Air moves at a finite speed and blades turn at a finite speed. Mass can only be accelerated by a finite force.
    Trying to make control corrections before the mechanical action from the last correction has even begun is hardly worthwhile.

    If you can only type 10 characters per second on a keyboard, sampling the keyboard a thousand times per second is not useful.

    We worked on the first lunar lander software, probably before your father was born, with very simple computers.

    You can make really fast control loops with slow hardware if you understand the Nyquist sampling criterion and don't waste computation time where it's not effective.
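
    (As a worked example of that point: if the motors and airframe cannot meaningfully respond above roughly 50 Hz, Nyquist says sampling much beyond a few hundred hertz buys almost nothing; a clean, well-used 200 Hz loop will beat a wasteful 2 kHz one.)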
    • This is why I like this forum... such interesting information from such smart individuals....

      It's humbling....

      Great post!

  • My 2 cents' worth: on any given day I'll switch between the "infinitely parallel" languages of VHDL and Verilog and the sequential style of embedded code. This affects my intuitive view of how things interact to such an extent that I find the sequential operations of software, even when they are wrapped in a multi-threading RTOS, quite disturbing!

    In an FPGA system I would expect a task to be completed in somewhere between 2 ns and 20 ns, and the abstract language used to design these mystical devices allows it to run an almost unlimited number of tasks at the same time. In a processor-based system the same task would take microseconds or even milliseconds, and then it would need to do the next task and the next and so on. The cumulative time can add up to hundreds of milliseconds.

    It would be difficult to convince those of you who have trained your brains for so many hours to see things as sequential processes why there is such elegance in real-time parallel processing, but let me give an example:

    Try to perform any action like walking through a door or making a cup of tea by doing only one thing at a time. You can switch between tasks as often as you like so that you mimic an RTOS or you can complete a task before switching to the next one. For this experiment, you must consider every little action that you make including breathing, blinking, moving your eyes, moving any part of your body and thinking.

    This gives you some idea of how the world feels to a conventional microprocessor based system. Now repeat the above task as you would normally and you get an idea of how a decent FPGA system feels.

    From a practical perspective, conventional processing may well be "fast enough" to manage the things that most drones will be used for. But imagine the possibilities...

    • @Lazer developer:

      My architecture is individually sequential, but combined it is a dual-core, real-time (right down to the picosecond) parallel processing system.

      As in:

      1) When the sensors are being sampled on CORE1, CORE2 is doing GPS math completely independently (even without any hardware dependency).

      2) When the WriteMotors() function is called on CORE1, CORE2 processes the GPS interrupt.

      What I am trying to get at is that individually each core is sequential, but as a whole it is a hardware-layered RTOS; hence, even though they are separate, each core's individual throughput is higher than it would be on a single shared processor.
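
      A sketch of what the CORE2 side of that overlap might look like (Arduino-style C++; the parser, the navigation math, and the bus helpers are illustrative placeholders):

```cpp
// Core 2 (IHOU) side: GPS parsing and navigation math run on their own
// chip, fully overlapped with Core 1's flight cycle.
bool parseGpsByte(uint8_t b) { (void)b; return false; }  // real NMEA parser here
void computeNavigation()     { /* heavy GPS math, off the flight core */ }
uint8_t navCommand()         { return 0; }               // bounded command only
void busWrite(uint8_t cmd)   { (void)cmd; }              // hand off to Core 1

void setup() { Serial.begin(57600); }      // GPS UART (placeholder rate)

void loop() {
  while (Serial.available()) {
    if (parseGpsByte(Serial.read())) {     // a complete fix was assembled
      computeNavigation();                 // runs while Core 1 samples/actuates
      busWrite(navCommand());              // only a bounded command crosses over
    }
  }
}
```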

      What are the advantages compared to a single-chip hardware solution using the same resources?

      1) Since the cores are isolated and independent, no failure can occur on the main control core due to a hardware problem on the Higher Operations Unit (CORE2).

      2) If a person developing an app is not very conversant with the control events and hardware lines used by the autopilot, any code they write could, on a single-chip design, potentially lead to a system crash; here it cannot touch CORE1.

      3) FPGAs are very good in performance (I have used the MOJO FPGA), but they are expensive to procure and more complicated to manufacture reliably: if the pins are exposed to static during production, the gates get discharged and the FPGA becomes unusable.

      For example, I recently developed an app for a person who wanted a flashlight triggered at a height of 50 m. As mentioned before, I have developed libraries for embedded C++ and Java compatible with UART. All that has to be done is to connect an SOC like an RP2 or BeagleBone, anything with a JVM running on it, via its hardware UART port to the autopilot's Core 2 UART.

      Have a look at the code attached; it is a live app I am working on. You can see that none of the while loops will cause any problem, and even if the RP's governor suddenly fails, it will not affect the performance of the drone.
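
      (The attached app itself is Java; as a rough C++ equivalent of the same pattern, where the ”ALT:” line format, the serial path, and the one-shot trigger are hypothetical placeholders:)

```cpp
// Companion-board program: read altitude lines from the autopilot's Core 2
// UART and fire a flashlight trigger at 50 m. Assumes the serial port is
// already configured (e.g. via stty); a real app would toggle a GPIO.
#include <cstdio>
#include <cstring>
#include <cstdlib>

int main() {
  FILE *uart = std::fopen("/dev/serial0", "r");  // Core 2 UART (placeholder path)
  if (!uart) { std::perror("uart"); return 1; }

  char line[64];
  bool fired = false;
  while (std::fgets(line, sizeof line, uart)) {      // blocking here is safe:
    if (std::strncmp(line, "ALT:", 4) != 0) continue; // Core 1 never waits on us
    float alt = std::strtof(line + 4, nullptr);
    if (alt >= 50.0f && !fired) {
      std::puts("TRIGGER flashlight");               // one-shot trigger at 50 m
      fired = true;
    }
  }
  std::fclose(uart);
}
```

      Even if this whole program hangs or dies, nothing on Core 1 is waiting on it.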

      As I have mentioned earlier, ”a system hang is not an option in the case of drones”; hence, very early in my AP build, I decided that drone flight integrity had to be the top priority when making it app-expandable. The developer should have the comfort of knowing that any programming errors will not result in a crash.

       

      Sample App.java
