I had the Oculus Rift lying around and decided to do a little project today to integrate FPV video into a 3D environment. It's tempting to just make a surface and project the video onto that, but I thought it would be more useful to use head movements, instead of buttons, to switch between informative displays, and see where things go from there. Since the computer will be in the field anyway, you might as well use 3G to download maps or use databases to enrich the FPV experience. Because the Rift is so immersive and the video is essentially static, I didn't try to create any illusion (that would lead to motion sickness); I just made it explicit that the video is a projection of some sort onto a surface.

This virtual ground station has prerecorded video projected on a screen, with a supporting display on either side. One side shows diagnostic information from the plane; the other shows a map, and thus the location of the craft in the actual world, along with some navigational information. In principle, you can have any supporting information screen you want; it depends on the task.

I ended up using the UDK: the Rift's shader features in Unity require Unity Pro, a $1500 investment (though you do get 4 months of Pro free with the Rift). For non-commercial use, the UDK is free.

A remaining challenge is how to get telemetry and the video into the 3D environment through the scripts at low latency.
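
As an illustration of the telemetry half, here's a minimal sketch of one possible relay, assuming the autopilot link streams simple comma-separated lines over UDP and the 3D environment reads the latest state as JSON from a local port; the ports and field layout are invented for the example:

```python
import json
import socket

# Hypothetical setup: the autopilot link streams lines like
# "lat,lon,alt,roll,pitch,yaw" over UDP; the 3D environment reads
# the latest state as JSON from a second local port.
TELEMETRY_PORT = 14550   # assumed inbound telemetry port
ENGINE_PORT = 9000       # assumed port the 3D scene listens on

def relay():
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("0.0.0.0", TELEMETRY_PORT))
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    fields = ("lat", "lon", "alt", "roll", "pitch", "yaw")
    while True:
        line, _ = rx.recvfrom(1024)           # blocks until a packet arrives
        values = [float(v) for v in line.decode().strip().split(",")]
        state = dict(zip(fields, values))
        # Re-serialize as JSON so the engine-side script stays trivial.
        tx.sendto(json.dumps(state).encode(), ("127.0.0.1", ENGINE_PORT))

if __name__ == "__main__":
    relay()
```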

There's another opportunity to make this more virtual and engaging: a 3D model of the plane flying over a holographic projection table, showing the location of your plane in 3D above the map, with a line showing its intended route. You could even make the map itself 3D using DTMs (digital terrain models).

The challenge for VR is that the pilot's interaction with any application is limited: you can't hold three controllers at the same time, and rich interactions like typing on a keyboard are impossible. Nevertheless, it'd be interesting to see what can be done with game elements like room triggers and other interaction methods to add more functionality to this environment.

Another limitation is that tasks requiring precision (for example, placing a waypoint) are rather difficult or take too much time. It will take a while to figure out a useful interface for that.

The things I learned were:

  1. The Rift can be useful for constructing a relatively large space full of information. In the end it's like having three monitors on a desk, but with poor resolution (like 320x200). The Rift does not have enough resolution for reading small text, so things have to be graphic, iconic and relatively large.
  2. You can move about in 3D space to essentially zoom in or out. A complication here is that if you fly manually, you don't have a hand free to control your position in the virtual station. For autopilots that's less of a problem.
  3. Content that is now 2D on a desktop screen can be turned into iconic 3D representations, like the plane example above, where you have a large "table" with a map from Google Earth on it. Other iconic 3D representations can be used to represent other events, actors, etc.
  4. Video perception is reasonable, and the screen object itself can be made much larger for better visibility. That would require moving your head around, but Fatshark goggles, for example, already require you to move your eyes around to see all the corners of the screen.
  5. This is an opportunity to get rid of OSD information obscuring the 2D display.
  6. There's no way you can use this technology on moving vehicles (motion sickness and disorientation).
  7. I expect that using the Rift to control the camera pan/tilt gives you motion sickness, because the latency is too high.
  8. (theoretically and philosophically): If you enlarge the space and attach elevators and other interfaces to databases, you can create a planning room in the back and make the tasks a person is responsible for location-dependent. Depending on where a person stands in the VR environment, they become responsible for particular tasks (see the sketch after this list). That means you could in theory move about in this 3D space and take over from the pilot, or walk into the back, do some map planning, and then get back to the original task.
  9. There are other opportunities to take advantage of a person's location in VR space, for example establishing communication groups: as soon as you walk into a virtual circle, you get added to that group's radio channel.
  10. The ground station here is clearly a ground base somewhere. The coolest Virtual Ground Station (let's coin the term VGS) would be a model of a vehicle (the Star Trek Enterprise) with the video projected on the forward bridge window and supporting consoles/displays around the bridge that are monitored/controlled by your operators. The idea of the pilot actually being onboard a "virtual bridge" is a pretty cool concept. It then becomes more natural to think about other virtual elements, for example taking the planned vehicle route from APM and projecting it around the vehicle. That would allow you to look around and see intentions and all virtual artefacts (virtual representations of data in a 3D space, essentially).
  11. I reckon that the maximum time one can spend in the Rift is around 30 minutes, before you get too tired or slightly motion sick.
  12. And then your phone rings.....
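
To make point 8 above (and the communication circles of point 9) a little more concrete, here is a minimal sketch of such location-based zone triggers in Python; the zone names, positions and radii are invented for the example:

```python
import math

# Hypothetical task zones in the virtual station: name -> (x, y, radius).
ZONES = {
    "pilot_seat": (0.0, 0.0, 1.5),
    "planning_table": (5.0, 2.0, 2.0),
    "radio_circle": (-3.0, 4.0, 1.0),
}

def active_zone(x, y):
    """Return the zone the user stands in, or None when outside all zones."""
    for name, (zx, zy, radius) in ZONES.items():
        if math.hypot(x - zx, y - zy) <= radius:
            return name
    return None

# Walking from the pilot seat to the planning table hands over the task.
print(active_zone(0.2, -0.3))  # -> "pilot_seat"
print(active_zone(5.5, 2.5))   # -> "planning_table"
```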

Because the Rift is so incredibly immersive, this environment really gives you the feeling of being there. In this application, the line between reality and virtuality really blurs, because your interactions with this environment (or from outside it) have a real impact on the actual world. That's not a statement pro or contra, but an observation that it's not yet fully known how people should take care of themselves after having used the Rift. Should they take specifically designed breaks after 30 minutes of use to "get back to the real world"?

 


Comments

  • Wow. Groundbreaking work on user interfacing, UAS operations (job "sharing"!), better situational awareness, and other topics. Perhaps voice control could be used to enlarge or shrink any object in the field of view in increments, depending on the situation, like "Enlarge control panel", "Shrink control panel". And, "Stop phone ringing". :-)

  • randy: like being in a UFO with the outward-looking window shifting around the hull? Certainly possible. Video stabilization uses features in the image to correctly place the image on a surface. That'd be an interesting experiment. It would probably give the feeling that the UFO flies a very stable and determined path, but the controller for the window is glitchy. Or that the UFO is on fire and crashing, maybe.

    A disadvantage is that it would still move around on large deviations, so you'd have to follow it if the direction moves too far off.

    I think that using only video stabilization, it should be possible to update the UV texture coordinates of the image. That would cut off certain portions of the image (depending on the amount of vibration), but the image appears more stable. Thus, it would eliminate the vibration/jitter effects, but not large deviations in course/pitch/roll; a quick sketch of the idea follows below.
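
    To make that concrete, here's a minimal sketch of the UV-shift idea in Python; the frame size and crop margin are arbitrary, and the per-frame offset is whatever the stabilizer reports:

    ```python
    def stabilized_uv_window(dx_px, dy_px, width, height, margin=0.05):
        """Return (u0, v0, u1, v1) of a cropped, counter-shifted texture window.

        dx_px, dy_px: per-frame jitter offset reported by the stabilizer, in pixels.
        margin: fraction of the frame reserved on each side for shifting.
        Large deviations saturate at the margin, so only jitter is removed.
        """
        # Convert the pixel offset to UV space and shift the window against it.
        du = max(-margin, min(margin, -dx_px / width))
        dv = max(-margin, min(margin, -dy_px / height))
        return (margin + du, margin + dv, 1.0 - margin + du, 1.0 - margin + dv)

    # Example: a 640x480 frame jittering 6 px right and 3 px up.
    print(stabilized_uv_window(6, -3, 640, 480))
    ```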

  • Re the head-tracking and nausea problem: I wonder whether, if the camera mount returned its current estimate of its earth-frame roll, pitch and yaw angles along with the video, that would allow you to project a virtual moving screen within the Oculus. In some ways this would be similar to the image-stabilized videos we sometimes see, where the center of the image is unmoving but the edges appear to move all over the place.

    Through the Oculus, too, the virtual screen would move all over the place, but you wouldn't get sick, because the thing you're likely looking at wouldn't move much; only the edges of the screen would appear to shift.
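
    For what it's worth, here's a minimal sketch of placing such a gimbal-slaved screen quad from the reported earth-frame angles; the axis convention (x forward, Z-Y-X euler order), viewing distance and screen size are all assumptions:

    ```python
    import math

    def euler_to_matrix(roll, pitch, yaw):
        """Rotation matrix for Z-Y-X (yaw, pitch, roll) earth-frame angles in radians."""
        cr, sr = math.cos(roll), math.sin(roll)
        cp, sp = math.cos(pitch), math.sin(pitch)
        cy, sy = math.cos(yaw), math.sin(yaw)
        return [
            [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
            [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
            [-sp, cp * sr, cp * cr],
        ]

    def screen_corners(roll, pitch, yaw, dist=3.0, half_w=1.6, half_h=0.9):
        """Corners of the video quad, rotated to where the gimbal says it points."""
        rot = euler_to_matrix(roll, pitch, yaw)
        local = [(dist, -half_w, -half_h), (dist, half_w, -half_h),
                 (dist, half_w, half_h), (dist, -half_w, half_h)]
        # Rotate each local corner into the earth frame around the viewer.
        return [tuple(sum(rot[i][j] * c[j] for j in range(3)) for i in range(3))
                for c in local]

    # Example: gimbal pitched 10 degrees down, yawed 20 degrees right.
    for corner in screen_corners(0.0, math.radians(-10), math.radians(20)):
        print(corner)
    ```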

  • Great job. I will follow your steps. Keep going.

  • Really nice functional analysis of using this immersive headset with a real-time interactive application.

    Especially regarding some of the limitations and difficulties - "Errpp!"

    Some of the problems would probably be reduced with the eventual release of the Rift's higher-resolution production version.

  • This is a great idea. I have been working on just trying to get FPV footage to appear properly and use head motion tracking to turn the camera. As you said, the latency is pretty high and can make a user pretty nauseous. Keep up the work, I will follow with interest!