I had an Oculus Rift lying around and decided to do a little project today: integrating FPV video into a 3D environment. It's tempting to just make a surface and project the video onto that, but I thought it would be more useful to use head movements to switch between informative displays instead of buttons, and see where things go from there. Since the computer will be in the field anyway, you might as well use 3G to download maps or query databases to enrich the FPV experience. Because the Rift is so immersive and the video is pretty much static, I didn't try to create any illusion (that would lead to motion sickness); I just made it explicit that the video is a projection of some sort on a surface.
This virtual ground station has prerecorded video projected on a screen, with a supporting display on either side. One side shows diagnostic information from the plane; the other shows a map with the location of the craft in the actual world, along with some navigational information. In theory you can have any supporting information screen you want; it depends on the task.
I ended up using the UDK, because Rift support in Unity depends on shader features that require Unity Pro, a $1500 investment (you do get four months of Pro free with the Rift). For non-commercial use, the UDK is free.
A remaining challenge is how to get telemetry and video into the 3D environment through the scripts at low latency.
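One low-latency option for the telemetry side is pushing small fixed-size packets over UDP and parsing them on arrival. Here is a minimal Python sketch; the packet layout and field names are my own assumptions for illustration (the real APM/MAVLink stream is more involved):

```python
import socket
import struct

# Hypothetical telemetry packet (little-endian):
# lat (double), lon (double), alt_m (float), heading_deg (float), battery_v (float)
PACKET_FMT = "<ddfff"
PACKET_SIZE = struct.calcsize(PACKET_FMT)  # 28 bytes

def parse_telemetry(packet):
    lat, lon, alt, heading, battery = struct.unpack(PACKET_FMT, packet)
    return {"lat": lat, "lon": lon, "alt_m": alt,
            "heading_deg": heading, "battery_v": battery}

def listen(port=14550):
    # UDP keeps latency low: a dropped fix is better than a stale one.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:
        data, _addr = sock.recvfrom(PACKET_SIZE)
        yield parse_telemetry(data)

# Demonstrate with a crafted packet instead of a live link:
sample = struct.pack(PACKET_FMT, 52.37, 4.89, 120.0, 270.0, 11.1)
fix = parse_telemetry(sample)
print(fix["alt_m"], fix["heading_deg"])  # prints: 120.0 270.0
```

In the engine, each parsed fix would then drive the diagnostic display and the map marker; the video stream is a separate, harder problem.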
There's another opportunity to make this more virtual and engaging: a 3D plane flying over a holographic projection table, showing the location of your plane in 3D above the map, with a line indicating its intended route. You could even make the map itself 3D using DTMs (digital terrain models).
The challenge for VR is that the pilot's interaction with any application is limited: you can't hold three controllers at the same time, and rich interactions like typing on a keyboard are impossible. Nevertheless, it'd be interesting to see what can be done with game elements like room triggers and other interaction methods to add more functionality to this environment.
Another limitation is that tasks requiring precision (for example, placing a waypoint) are rather difficult or take too much time. It will take a while to figure out a useful interface for that.
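One approach to precision tasks like waypoint placement is a gaze cursor: intersect the head-tracker's view ray with the map-table plane and drop the waypoint where the user is looking, confirmed with a dwell time or a single button. A small sketch of the geometry, where the table height and coordinate frame are assumptions:

```python
# Intersect the gaze ray from the head with the horizontal plane y = table_y.
# head_pos and gaze_dir are (x, y, z) tuples in the virtual station's frame.

def gaze_on_table(head_pos, gaze_dir, table_y=1.0):
    """Return the (x, z) point on the table hit by the gaze ray,
    or None if the user is looking away from the table."""
    px, py, pz = head_pos
    dx, dy, dz = gaze_dir
    if dy == 0:          # looking parallel to the table surface
        return None
    t = (table_y - py) / dy
    if t <= 0:           # the table plane is behind the viewer
        return None
    return (px + t * dx, pz + t * dz)

# Looking down at 45 degrees from 2 m above a 1 m-high table:
print(gaze_on_table((0.0, 2.0, 0.0), (1.0, -1.0, 0.0)))  # prints: (1.0, 0.0)
```

The cursor itself is cheap; the hard part is the confirmation gesture, since gaze alone jitters too much for pinpoint placement.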
The things I learned were:
- The Rift can be useful to construct a relatively large space full of information. In the end it's like having three monitors on a desk, but at low resolution (something like 320x200). The Rift doesn't have the resolution for reading small text, so content has to be graphic, iconic, and relatively large.
- You can move about in 3D space to essentially zoom in or out. A complication: if you fly manually, you have no hands left to control your position in the virtual station. For autopilots that's less of a problem.
- Content that is now 2D on a desktop screen can be turned into iconic 3D representations, like the plane example above, where you have a large "table" with a map from Google Earth on it. Other 3D iconic representations can stand in for other events, actors, etc.
- Video perception is reasonable, and the screen object itself can be made much larger for better visibility. That would require moving your head around, but Fatshark goggles, for example, already require you to move your eyes around to see all the corners of the screen.
- This is an opportunity to get rid of OSD information obscuring the 2D display.
- There's no way you can use this technology on moving vehicles (motion sickness and disorientation).
- I expect that using the Rift to control the camera pan/tilt would give you motion sickness, because the round-trip latency is too high.
- (Theoretically and philosophically:) If you enlarge the space and connect elevators and other interfaces to databases, you can create a planning room in the back and make the tasks a person is responsible for location-dependent. Depending on where a person stands in the VR environment, they become responsible for particular tasks. In theory you could move about in this 3D space and take over from the pilot, or walk into the back, do some map planning, and then get back to the original task.
- There are other opportunities to take advantage of a person's location in VR space, for example establishing communication groups: as soon as you walk into a virtual circle, you get added to that group's radio channel.
- The ground station here is clearly a ground base somewhere. The coolest Virtual Ground Station (let's coin the term VGS) would be a model of a vehicle (a Star Trek Enterprise of sorts), with the video projected on the forward bridge window and supporting consoles/displays around the bridge that are monitored and controlled by your operators. The idea of the pilot actually being onboard a "virtual bridge" is a pretty cool concept. It then becomes more natural to think about other virtual elements, for example taking the planned vehicle route from APM and projecting it around the vehicle. That would allow you to look around and see intentions and all other virtual artefacts (virtual representations of data in 3D space, essentially).
- I reckon that the maximum time one can spend in the Rift is around 30 minutes, before you get too tired or slightly motion sick.
- And then your phone rings.....
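The zone ideas above (location-dependent tasks, walk-in radio circles) boil down to a simple proximity check against named regions. A sketch with made-up zone names, positions, and radii:

```python
import math

# Hypothetical zones on the station floor: (name, center_x, center_z,
# radius_m, radio_channel). All values are invented for illustration.
ZONES = [
    ("pilot_station",  0.0, 0.0, 1.5, "flight"),
    ("planning_table", 5.0, 0.0, 2.0, "planning"),
    ("comms_circle",   0.0, 6.0, 2.5, "ground_crew"),
]

def active_zone(x, z):
    """Return (zone_name, radio_channel) for the operator's floor
    position, or (None, None) when standing outside every zone."""
    for name, cx, cz, radius, channel in ZONES:
        if math.hypot(x - cx, z - cz) <= radius:
            return name, channel
    return None, None

print(active_zone(0.2, -0.3))   # inside the pilot station
print(active_zone(10.0, 10.0))  # in no zone: roam freely, hear nothing
```

The same check, run every frame on each tracked person, is all that's needed to hand over tasks or switch radio channels as people walk around the station.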
Because the Rift is so incredibly immersive, this environment really gives you the feeling of being there. In this application, the line between reality and virtuality really blurs, because your interactions with this environment (or from outside it) have a real impact on the actual world. That's not a statement for or against, but an observation that it's not yet fully known how people should wind down after having used the Rift. Should they take specifically designed breaks after 30 minutes of use to "get back to the real world"?