Quoting from the APM 2.x release notes:
"We’ve tested it (APM 2.x) for months, including lots of flying, and it significantly outperforms the DCM used in APM 1.0.
It’s your choice whether you want to use the MPU-6000 internal sensor fusion or do it yourself in the main processor, but if you choose the DMP it frees up nearly 40% of the processing power in the Atmega 2560"
1) Currently, as I see from the source, we read the accel and gyro vectors from the MPU-6000 and then run DCM with a (non-linear) complementary filter for AHRS.
So how are we taking full advantage of the MPU-6000? As I understand it, it provides its own sensor fusion (the DMP), so we wouldn't need DCM or a Kalman filter on the main processor. We could interface the compass, GPS, and other sensors over I2C as required and do the sensor fusion inside the MPU-6000, making the code easier to manage and leaving more flash memory free!
2) Do we have the choice of using MPU-6000 sensor fusion instead of the external DCM? Is it already done, and if so, how? If the quote holds, we could add more features in APM 2.6 and the next generation alongside PX4!
Thanks for any inputs.
Thanks for your inputs.
That 40% improvement was stated by the APM 2.0+ developers, as per the link I quoted in the first post above. I don't yet know what the source of that statement was.
Does the PX4 also use the MEMS MPU-6000 sensor?
Where did you get figures like 40% from? Have you profiled APM?
I'm not disputing that some time is spent doing all those fancy multiplications, but I doubt it could be that much, especially considering the ATmega has built-in multiply instructions for fixed point that are reasonably fast. I'm guessing, without actually knowing, that most of the CPU time is spent on reading, with all the handshaking and I/O with the various devices.
How does that work? How do we send Compass data, and more importantly, GPS data, into the MPU for internal fusion?
Anybody have any views on this?
I am sure some curious user-developers would have this question.