A single layer of 64 neurons with 8 inputs and 8 outputs, learning rate set to 0.1.

Training cases were created for vertical take-off. The universal approximator was constructed for the smooth control function:

CONTROL: Vn+1 --> Vn 

V is {motor1, motor2, motor3, motor4, Roll, Pitch, Yaw, zcoor}
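In C++ terms, the state vector V might be represented as follows (a sketch; the field names mirror the list above, and the layout is an assumption since the original code is not shown):

```cpp
#include <array>
#include <cassert>

// One training tuple: the flight state V at a single time step.
// All values are normalized to [0, 1] (zcoor divided by maxZ = 30).
struct StateV {
    double motor1, motor2, motor3, motor4;  // normalized motor RPMs
    double roll, pitch, yaw;                // attitude
    double zcoor;                           // altitude / maxZ
};

// The NN treats V as a flat 8-element vector.
inline std::array<double, 8> to_vector(const StateV& v) {
    return {v.motor1, v.motor2, v.motor3, v.motor4,
            v.roll,   v.pitch,  v.yaw,    v.zcoor};
}
```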

Desired from simulator: 

{motor1, motor2, motor3, motor4, Roll, Pitch, Yaw, zcoor}

{0.701, 0.7, 0.7, 0.7, 0.500005, 0.491911, 0.500174, 0.051289}

The NN was trained offline through adaptive learning on a set of 105,000 test cases; zcoor was normalized by a factor maxZ = 30:

Obtained from NN learning:

{motor1, motor2, motor3, motor4, Roll, Pitch, Yaw, zcoor}

{0.751359, 0.69765, 0.743422, 0.710697, 0.572931, 0.580407, 0.620613, 0.107281}  

The first four numbers are the motors' RPMs, normalized between 0 and 1.


128 multiplications and additions are required, plus 10 multiplications and 4 additions for the Taylor-series expansion of the sigmoid function around 0. The learning matrices occupy 128 words or long words.
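A sigmoid approximated by its Taylor expansion around 0 could look like the sketch below (the exact truncation the author uses is not stated; this version keeps terms up to x^7 and evaluates them Horner-style, and is accurate only for small |x|):

```cpp
#include <cassert>
#include <cmath>

// Sigmoid approximated by its Taylor (Maclaurin) series around 0:
//   sigma(x) ~= 1/2 + x/4 - x^3/48 + x^5/480 - 17x^7/80640
// Horner evaluation keeps the multiply/add count small; the
// approximation is good for roughly |x| < 2.
inline double sigmoid_taylor(double x) {
    const double x2 = x * x;
    return 0.5 + x * (0.25 + x2 * (-1.0 / 48.0
                    + x2 * (1.0 / 480.0 + x2 * (-17.0 / 80640.0))));
}
```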

The motors' accuracy is around the second digit, which should be OK for a plane but is not enough for a quadcopter!

I will recheck the calculations, but I am afraid the motor numbers need to be broken into a sum of two terms in order to attain the accuracy.


1. I changed the 64 neurons to 128 and it did not change the result. The matrices should be randomized between -0.5 and +0.5, or lower accuracy is obtained.

2. Learning on the 105,000 test cases took several minutes (less than 5) on an iMac. The original set was 7,000 test cases, which was then repeated in its entirety 15 more times, i.e. 105,000 = 15 × 7,000.


1. Look at higher arithmetic precision.

2. Experiment with 2 layers of neurons.

3. Obtain theoretical upper and lower bounds on the accuracy.

4. Break the motor numbers into a sum of two, e.g. 0.701 will become (0.7, 0.1); then shift the 0.1 to the right once and add to recover the sum.
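One possible interpretation of that split-and-shift idea is sketched below (this is an assumption about the intended scheme: a coarse single-decimal part plus a fine remainder scaled up by 10, so each NN output only needs about two digits of accuracy; recombination shifts the fine part one digit to the right and adds):

```cpp
#include <cassert>
#include <cmath>
#include <utility>

// Split a motor value into a coarse part (one decimal digit) and a
// fine remainder scaled up by 10, so the NN learns two easier targets.
inline std::pair<double, double> split_motor(double v) {
    double coarse = std::floor(v * 10.0) / 10.0;  // e.g. 0.701 -> 0.7
    double fine = (v - coarse) * 10.0;            // remainder, scaled up
    return {coarse, fine};
}

// Recombine: shift the fine part one decimal digit to the right
// (divide by 10), then add it back to the coarse part.
inline double recombine(double coarse, double fine) {
    return coarse + fine / 10.0;
}
```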



Replies to This Discussion

7 sets of 1,000 vertical-flight data points (from the simulator) were subjected to NN learning: 16 neurons, 10 inputs/outputs, 0.1 learning rate, with 20 repeated applications of learning over the entire set, for a total of 140,000 trainings. See the attached image for the error analysis.

Mean error 0.05 plus/minus 0.03 std, i.e. between 1 and 2 meters of error for x, y, z (with positions normalized by maxZ = 30, a 0.05 error corresponds to roughly 1.5 m).


1. Increasing the neuron count beyond 16 introduced more errors.

2. Increasing the repeated learning from 5 to 20 passes decreased the mean error from 15% to 5%.

X-axis is the index for the take off data.

Y-axis is the corresponding error.

The error is calculated as in multivariate vector calculus, to match the backpropagation learning:

|output - desired output| / |input|

|...| is the Euclidean norm.
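That relative-error measure can be written directly from the formula (a minimal sketch for the 10-dimensional vectors used in these replies):

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <cstddef>

// Euclidean (L2) norm of a 10-dim vector.
inline double norm(const std::array<double, 10>& v) {
    double s = 0.0;
    for (double x : v) s += x * x;
    return std::sqrt(s);
}

// Relative error from the formula: |output - desired| / |input|.
inline double relative_error(const std::array<double, 10>& output,
                             const std::array<double, 10>& desired,
                             const std::array<double, 10>& input) {
    std::array<double, 10> diff{};
    for (std::size_t i = 0; i < diff.size(); ++i)
        diff[i] = output[i] - desired[i];
    return norm(diff) / norm(input);
}
```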


I just finished an offline training for a quadcopter (at high RPMs) and the results improved, reducing the error:

The x-axis is the index of the OFFLINE test cases (18 tests, each of 1,000 (input, output) tuples obtained from the simulator, i.e. 18,000 tuples of 10-dim arrays: 10 inputs, 10 outputs). The y-axis is the calculated error: the universal approximator, based on a single layer of 16 neurons with 10 inputs, recalculated the test cases, and the results were compared against the original values to obtain the errors.



Standard Deviation:


I maxed out the arithmetic precision (in Mathematica, from 50 to 200 digits!!) and the results did not change much. Therefore double in C++ is more than enough precision for a highly unstable flying machine.

A new step was added to the learning: it seems the learning is biased towards the latest test cases (perfect for INFLIGHT training), so for OFFLINE training I randomly shuffled the 18,000 test cases and repeated the learning 20 times. An order-of-magnitude reduction in error was observed! (total 360,000 learning cases)
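The shuffle-then-repeat step might be sketched like this (the network type and per-tuple update are placeholders for the author's backpropagation code; only the shuffling and epoch structure are shown):

```cpp
#include <algorithm>
#include <cassert>
#include <random>
#include <vector>

// One (input, output) training tuple; 10-dim in this discussion.
struct Tuple {
    std::vector<double> in, out;
};

// Shuffle the whole set before every epoch so the network is not biased
// towards the most recently seen cases, then repeat for the requested
// number of epochs (20 epochs x 18,000 tuples = 360,000 presentations).
// TrainFn is a stand-in for the actual backpropagation update.
template <typename Net, typename TrainFn>
void train_shuffled(Net& net, std::vector<Tuple>& cases,
                    int epochs, TrainFn train_one) {
    std::mt19937 rng(12345);  // fixed seed for reproducible shuffling
    for (int e = 0; e < epochs; ++e) {
        std::shuffle(cases.begin(), cases.end(), rng);
        for (const Tuple& t : cases) train_one(net, t);
    }
}
```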

As soon as I have decided on my target platform, I will release the code in both Mathematica and C++ under the GNU General Public License.


I forgot: I am using 1/1000 s resolution for the calculations. I am wondering, what resolution does the APM board use to sample its sensors?



© 2018   Created by Chris Anderson.   Powered by
