The Blade MCX, being the most stable indoor flying thing ever invented, buys most of the precision, but it still took some doing to get the vision, sensors, & feedback good enough to keep it in the smallest box. The Blade MCX is 4 years old, but it had just the right combination of cyclic, flybar, & servos to be more stable than anything since.
The IMUs since then haven't achieved the same motion damping as the mechanical flybar. It's a mystery why the Blade CX2 wasn't as stable.
Got the last of the features on the tablet which were last done on the RC transmitter, years ago. Mainly, manual position control in autopilot mode. Rediscovered the age-old problem where velocity & direction can't be changed while a move is already in progress. The copter has to stop moving & recalculate a new starting position for a new line to fly.
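The retargeting step can be sketched as follows. This is a minimal illustration, not the actual autopilot code: the function name & the 2D tuple representation are assumptions. The idea is just that a new command abandons the old line & starts a fresh one from wherever the copter currently is.

```python
import math

def retarget(current_pos, new_target, speed):
    """When a new command arrives mid-move, abandon the old line &
    start a fresh one from the copter's current position.
    Returns the new line's starting point & the velocity to fly.
    (Hypothetical helper, 2D for simplicity.)"""
    dx = new_target[0] - current_pos[0]
    dy = new_target[1] - current_pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return current_pos, (0.0, 0.0)
    # unit direction along the new line, scaled to the commanded speed
    velocity = (dx / dist * speed, dy / dist * speed)
    return current_pos, velocity

start, vel = retarget((1.0, 2.0), (4.0, 6.0), 0.5)
```

The key point is that the new line's start is the current position, not the old line's endpoint, which is why the move has to stop & be recalculated.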
The algorithm was all part of a plan to make it fly in straight lines. Merely setting fixed velocities didn't make it fly in a straight line. It would need a much larger flying area for changing velocity to be practical. The war on straight lines was a long battle in 2008, comprising many blog posts.
As the tablet interface evolves, it's very confusing, with separate inputs for manual mode & autopilot mode.
The final leap in accuracy came from tapping the Blade MCX's integrated gyro to drastically improve the heading detection without increasing the component count.
Its heading is extremely stable, allowing its position tracking to be more stable than using the magnetometer alone. The improvement costs nothing, but would require more parts on copters with no analog gyro already installed.
That was purely the magnetometer without the gyro.
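Blending a fast analog gyro with a slow magnetometer is a classic complementary filter. The sketch below shows the standard form of that technique; the function name & the tuning constant are assumptions, not the actual flight code.

```python
def fuse_heading(prev_heading, gyro_rate, mag_heading, dt, k=0.02):
    """Complementary filter: integrate the analog gyro for short-term
    stability & nudge toward the magnetometer to cancel gyro drift.
    k is a small magnetometer weight (hypothetical tuning value).
    Headings in degrees, gyro_rate in degrees/second."""
    predicted = prev_heading + gyro_rate * dt
    # wrap the magnetometer error into [-180, 180) before blending
    error = (mag_heading - predicted + 180.0) % 360.0 - 180.0
    return (predicted + k * error) % 360.0
```

With k near zero the heading follows the gyro almost exactly, which is why the result is so much quieter than the magnetometer alone.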
Another discovery with this system was pointing it 45' left of the cameras during calibration seems to be the optimum alignment for the cyclic phasing.
So far, these micro copters have proven the smallest indoor autopilot works, but what you want is a flying camera. Dreams of useful quality video from a monocopter were busted. The available cameras aren't fast enough. There were signs that they could be synchronized to the rotation by pausing the clock. The blurry images would then require a really fast wireless connection.
A camera on a micro copter would take serious investment in really fast, microscopic, wireless communication. All roads are leading not to building aircraft, but perfecting a camera & wireless communication.
There is a desire to put the autopilot on a ladybird or convert something big enough to fly a camera.
Many years ago, a fake test pilot noted that averaged sensor data produced better flying than lowpass filtered sensor data. Lowpass filtering was the academic way of treating the data because it got rid of aliases.
The fake test pilot also noted that jittery servos produced better flying than perfectly timed servos.
In all these cases, the noisy unfiltered data had less latency than the filtered data, & glitching the servo PWM around 50Hz conveyed more data than the normal 50Hz update rate allowed. Since there were no data points at an alias frequency with enough amplitude to make the aircraft oscillate, the reduction in latency was a bigger win than the reduction in noise.
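The latency difference between the two approaches is easy to see with a step input. This is a generic illustration of box averaging vs a single pole IIR lowpass, not the actual filter code; the class names are made up.

```python
from collections import deque

class BoxAverage:
    """Plain average of the last n samples: (n-1)/2 samples of
    delay, then it catches up completely."""
    def __init__(self, n):
        self.buf = deque(maxlen=n)
    def update(self, x):
        self.buf.append(x)
        return sum(self.buf) / len(self.buf)

class Lowpass:
    """Single pole IIR lowpass: smoother output, but the
    exponential tail keeps lagging after the input settles."""
    def __init__(self, alpha):
        self.alpha = alpha
        self.y = None
    def update(self, x):
        self.y = x if self.y is None else self.y + self.alpha * (x - self.y)
        return self.y

box, lp = BoxAverage(4), Lowpass(0.5)
step = [0, 1, 1, 1, 1]
box_out = [box.update(x) for x in step]   # reaches 1.0 after 4 samples
lp_out = [lp.update(x) for x in step]     # still short of 1.0
```

The box average reaches the new value in a fixed number of samples, while the lowpass never quite gets there, which matches the fake test pilot's observation that the averaged data flew better.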
Now a camera system has 2 cameras, each running at 68fps, limited by clock cycles. They're not perfectly timed or synchronized, so an image from one camera or the other is captured at 136 unique points in time per second. A new position is calculated when each of the 136 frames comes in. This allows slightly faster position updating than if the cameras shot at exactly the same 68 points in time, without requiring more horsepower.
The velocity calculation has only a 1/13 second delay. It's pure noise, but it gives a much tighter flight.
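The interleaving is easy to demonstrate: one second of frames from two free-running 68fps cameras with any phase offset gives 136 distinct update instants. A toy sketch (the phase offset of 7ms is arbitrary, not a measured value):

```python
def frame_times(fps, phase, count):
    """Timestamps of `count` frames from one free-running camera."""
    period = 1.0 / fps
    return [phase + k * period for k in range(count)]

# two unsynchronized 68fps cameras; a position is recomputed at
# every frame, so the update instants interleave
cam0 = frame_times(68, 0.0, 68)
cam1 = frame_times(68, 0.007, 68)
updates = sorted(cam0 + cam1)   # 136 unique instants per second
```

As long as the phase offset isn't an exact multiple of the frame period, every frame lands at a distinct instant, which is where the "slightly faster position updating" comes from.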
Anyways, the dual 68fps system uses 90% of the raspberry pi with the ground station running. Without the ground station, it uses only 60%. The RLE compression from the board cams takes a lot less horsepower to decompress than the JPEG compression from the webcams, but the savings are eaten up by the higher framerate.
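Part of why RLE is so cheap to decompress: it's a single linear pass with no transforms. The board cams' actual wire format isn't documented here, so the (count, value) byte-pair scheme below is an assumption for illustration.

```python
def rle_decode(data):
    """Decode simple (count, value) byte-pair run length encoding,
    as a board cam might emit for thresholded images.
    (The real wire format is assumed, not known.)"""
    out = bytearray()
    for i in range(0, len(data), 2):
        # repeat `value` (data[i+1]) `count` (data[i]) times
        out.extend(bytes([data[i + 1]]) * data[i])
    return bytes(out)
```

Compare that single loop to a JPEG decode, which needs Huffman decoding, dequantization, & an inverse DCT per block.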
The dual cameras on a single pan/tilt mount at 320x240 70fps are probably as good as a cost effective system can get. Better results could be had from 640x480 or higher resolution at 70fps. That would take FPGA design & something faster than a raspberry pi. Webcams max out at 640x480 30fps, but higher framerate has proven more important than higher resolution.
Baby Vicon busted
The cameras in a 2 eye mount have a fixed convergence which can be hard coded. The cameras in 1 eye per mount have variable convergence which must be deduced from the servo angles. That couldn't be known as accurately as hoped. The Hitec HS-311 is the tightest servo known, but it's still not accurate enough.
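The sensitivity to servo error can be seen by intersecting the two pan rays. This is a simplified 2D sketch with an assumed geometry (both cameras facing the same way, pan angles measured toward each other), not the actual tracker code.

```python
import math

def triangulate(baseline, pan_left, pan_right):
    """Intersect the pan rays of cameras at (0,0) & (baseline,0),
    both nominally facing +y. pan_left/pan_right: degrees each
    camera is turned toward the other. (Geometry assumed.)"""
    tl = math.tan(math.radians(pan_left))
    tr = math.tan(math.radians(pan_right))
    y = baseline / (tl + tr)   # range to the convergence point
    return (y * tl, y)

near = triangulate(1.0, 5.0, 5.0)   # target about 5.7m out
off = triangulate(1.0, 5.0, 6.0)    # same target, 1 degree of slop
# -> roughly half a metre of range error from 1 degree of slop
```

At shallow convergence angles the rays are nearly parallel, so a degree of servo slop swings the intersection a long way, which is why even the tightest servo isn't accurate enough.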
If the cameras were on different sides of the room, so they always converged at 90 degrees, the problem would be solved, but that would require having a 270 degree field of view with no lights that could interfere with machine vision. The cameras have to be close together & on the same side of the room to make the lighting practical.