A lot has happened since the last blog post. I've worked hard on the stability of the "FPV" ground station software, with excellent results. Version 1.0 went live on the App Store last week, after a long review by Apple, and I'm already in the final testing phase for version 1.1, due next week. That release brings important performance improvements: it eliminates the occasional jerky playback and cuts CPU usage by 20-30% by avoiding some colorspace conversions.
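I won't publish the app's internals here, but for the curious: one typical way to avoid such conversions on iOS is to ask the hardware decoder for its native bi-planar YUV output, so frames never take a CPU-side trip through BGRA before display. A minimal sketch, assuming an H.264 stream decoded with VideoToolbox (not necessarily what "FPV" does):

```swift
import VideoToolbox

// A hedged sketch, not "FPV"'s actual code: request the decoder's native
// bi-planar YUV pixel format so no BGRA colorspace conversion is needed.
func makeDecoder(for format: CMVideoFormatDescription) -> VTDecompressionSession? {
    let attrs: [CFString: Any] = [
        kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
    ]
    var session: VTDecompressionSession?
    VTDecompressionSessionCreate(
        allocator: kCFAllocatorDefault,
        formatDescription: format,
        decoderSpecification: nil,
        imageBufferAttributes: attrs as CFDictionary,
        outputCallback: nil,  // decode via VTDecompressionSessionDecodeFrame's output handler
        decompressionSessionOut: &session
    )
    return session
}
```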
This version is a nice demonstration of what powerful ground station hardware and a high-speed telemetry downlink make possible. Although I'm using a simulator here, it could just as well have been an IP camera onboard a mobile vehicle.
As you can see in the video, I've made an iPhone/iPad app (called "FPVNav") for navigation planning. It uses waypoints as 'flight path handles' rather than points to fly through: an algorithm calculates and draws a flyable route between and around these handles, according to the settings of each handle.
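To make the idea concrete, here is a rough sketch of what such a handle could look like in code. The names and settings are my own illustration, not FPVNav's actual data structures:

```swift
import CoreLocation

// Hypothetical model of a 'flight path handle'; illustrative only.
enum HandleBehaviour {
    case flyThrough                  // pass exactly through the point, turning after it
    case flyBy(turnRadius: Double)   // cut the corner before the point (metres)
    case orbit(radius: Double)       // circle around the point (metres)
}

struct FlightPathHandle {
    let coordinate: CLLocationCoordinate2D
    let altitude: Double             // metres
    let behaviour: HandleBehaviour
}

// A planner would walk the handles and emit a dense, flyable polyline:
// straight legs joined by arcs that honour each handle's behaviour and radius.
func generateRoute(through handles: [FlightPathHandle]) -> [CLLocationCoordinate2D] {
    // Leg/arc construction elided; the fly-by geometry is sketched further below.
    return handles.map { $0.coordinate }
}
```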
Waypoints in typical autopilot implementations do not explicitly state how the transition from one point or leg to the next must be executed, which leaves room for ambiguity and thus "interpretation". As you can see in these examples, there are a number of possibilities: you can follow the leg on one side and exit or enter through the point, fly through the point with a turn, follow the leg on both sides and "cut before" the waypoint in a turn with a specified radius, or simply circle around a waypoint.
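For the "cut before" case the geometry is classical turn anticipation: an arc of radius r tangent to both legs means the turn must begin a distance r·tan(θ/2) before the waypoint, where θ is the course change. A quick sketch of that calculation:

```swift
import Foundation

// Standard turn-anticipation geometry (valid for course changes below 180°);
// this is the textbook construction, not FPVNav's actual code.
func turnAnticipationDistance(courseChange theta: Double, turnRadius r: Double) -> Double {
    return r * tan(abs(theta) / 2)
}

// Example: a 90° course change with a 50 m radius starts the turn 50 m early;
// a 120° change with the same radius starts it about 86.6 m early.
let d90  = turnAnticipationDistance(courseChange: .pi / 2,     turnRadius: 50) // 50.0
let d120 = turnAnticipationDistance(courseChange: 2 * .pi / 3, turnRadius: 50) // ~86.6
```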
The generated trajectory is sent to the "FPV" app as a message, and the app uses that data to display a virtual tunnel. In theory, if the field of view and the aspect ratio of the lens are matched, this virtual view should line up well; barrel distortion is not compensated for. It will be possible to integrate your own tools into this message exchange, as the specification and communication method will be described on my webpage.
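Until that specification is published, the shape of the message is anyone's guess; purely as an illustration of the kind of data involved, it might carry something like this:

```swift
import Foundation

// Purely a guess at the trajectory message; the real wire format will be
// published on my webpage.
struct TrajectoryMessage: Codable {
    struct Sample: Codable {
        let latitude: Double
        let longitude: Double
        let altitude: Double     // metres
    }
    let samples: [Sample]        // densely sampled flyable route
    let horizontalFOV: Double    // degrees; must match the camera lens to line up
    let aspectRatio: Double      // e.g. 16.0 / 9.0
}

// Encoding for transmission over the local network:
let message = TrajectoryMessage(
    samples: [.init(latitude: 50.85, longitude: 4.35, altitude: 120)],
    horizontalFOV: 90, aspectRatio: 16.0 / 9.0
)
let payload = try! JSONEncoder().encode(message)
```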
If you watch carefully, you can also see some limitations in the behaviour of these virtual cues when the vehicle pitches, rolls or yaws aggressively: the rotations seem to drive the virtual cues into the ground or lift them up. It is actually the video image that lags behind a bit (around 150 ms of latency versus roughly 30-60 ms for telemetry). Getting exactly the same latency on both links is going to be very difficult, especially because latency on wifi is variable, quite apart from lost packets, so this is something you have to live with. I'm considering mitigating some of these effects with transparency tricks that shift the focus to either the video or the telemetry.
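For what it's worth, one common mitigation (not something I'm claiming "FPV" does) is to delay the telemetry on purpose: buffer timestamped attitude samples and render the one closest to "now minus the estimated video latency", so the cues move with the picture at the price of lagging reality. A minimal sketch:

```swift
import Foundation

// Illustrative delay buffer: aligns overlay cues with a late video image.
struct AttitudeSample {
    let timestamp: TimeInterval                   // seconds, monotonic clock
    let roll: Double, pitch: Double, yaw: Double  // radians
}

final class TelemetryDelayBuffer {
    private var samples: [AttitudeSample] = []
    var videoLatency: TimeInterval = 0.150        // assumed ~150 ms video pipeline delay

    func push(_ sample: AttitudeSample) {
        samples.append(sample)
        // Drop anything older than one second; it will never be rendered.
        let cutoff = sample.timestamp - 1.0
        samples.removeAll { $0.timestamp < cutoff }
    }

    // Return the sample nearest to the delayed render time.
    func sample(at renderTime: TimeInterval) -> AttitudeSample? {
        let target = renderTime - videoLatency
        return samples.min { abs($0.timestamp - target) < abs($1.timestamp - target) }
    }
}
```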
I'm already considering features for the next version. One thing that comes to mind is a total energy display for landing approaches. This compares the vehicle's current total energy (kinetic plus potential) with the energy required to reach the touchdown point, which lets you manage the throttle very efficiently, especially during landings.
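The underlying arithmetic is simple; a minimal sketch, using specific energy (per unit mass) so the display is independent of vehicle weight. The names here are illustrative:

```swift
import Foundation

let g = 9.81  // m/s^2

// Specific total energy: kinetic plus potential, per kilogram.
func specificEnergy(speed v: Double, heightAboveTouchdown h: Double) -> Double {
    return 0.5 * v * v + g * h
}

// Positive surplus means excess energy for the approach (bleed it off with
// drag or a shallower descent); negative means a deficit (add throttle).
func energySurplus(currentSpeed: Double, currentHeight: Double,
                   approachSpeed: Double) -> Double {
    return specificEnergy(speed: currentSpeed, heightAboveTouchdown: currentHeight)
         - specificEnergy(speed: approachSpeed, heightAboveTouchdown: 0)
}
```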
The other is a cue based on raw IMU data. Accelerations and rotational rates can be used to predict, a short time ahead, what the vehicle's attitude will be and how it will be moving through space. This cue should give you a better feel for the vehicle's dynamics, because it shows you when to roll out of a turn, when to increase the turn rate, and so on.
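The simplest form of such a prediction just integrates the gyro rates forward over a short horizon, assuming they stay constant; a real implementation would also use the accelerations and filter the IMU. A sketch of the rate-only version:

```swift
import simd

// Minimal attitude prediction from raw gyro rates: rotate the current
// attitude by the body rates applied over the lookahead interval.
func predictAttitude(current q: simd_quatd,
                     bodyRates omega: simd_double3,   // rad/s, body frame
                     lookahead dt: Double) -> simd_quatd {
    let angle = simd_length(omega) * dt
    guard angle > 1e-9 else { return q }
    let delta = simd_quatd(angle: angle, axis: simd_normalize(omega))
    return simd_normalize(q * delta)   // apply the body-frame rotation
}
```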
The apps in this video are connected through "Bonjour", which allows for a 'zero configuration' setup: just make sure they're on the same network and they'll do the rest.
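On today's iOS, browsing for a Bonjour service looks roughly like this with Apple's Network framework; the service type "_fpv._tcp" is my placeholder, not the apps' real identifier:

```swift
import Network

// Discover ground stations advertising a (hypothetical) "_fpv._tcp" service.
let browser = NWBrowser(for: .bonjour(type: "_fpv._tcp", domain: nil), using: .tcp)
browser.browseResultsChangedHandler = { results, _ in
    for result in results {
        print("Found ground station:", result.endpoint)
        // Connect with NWConnection(to: result.endpoint, using: .tcp)
    }
}
browser.start(queue: .main)
```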