Google Cardboard Head-Up Display for Android


Google Cardboard your Drone!

This is a new Android app for running a heads-up display (HUD) on a smartphone, using Google Cardboard glasses to hold the phone.

I have posted a free version on Dropbox at https://www.dropbox.com/s/iysadmr511nvwlr/QtGstreamerGoogleCardboard.apk?dl=0

Your phone will need side-loading enabled to install the app. Please search online for how to side-load an app on your device.

This app overlays the HUD on a video stream using GStreamer, rendered in two distinct panes for viewing in the Google Cardboard viewer. The app supports both single and dual video streams, and depending on the processing power of your phone, it's possible to have a stereo view using multiple cameras.

The app automatically listens for a MAVLink data stream on UDP port 14550 (the default UDP port for MAVLink) to populate the HUD telemetry items. To get both video and telemetry, you will need to supply both data streams to the phone; for a simple WiFi camera without telemetry, you can connect directly to the camera for live video.

Scenario #1: Simple connection to a WiFi camera


In the app's video setup menu (top-left toolbar), supply this string for the GStreamer pipeline:

souphttpsrc location=http://10.0.0.1:60152/liveview.JPG?%211234%21http%2dget%3a%2a%3aimage%2fjpeg%3a%2a%21%21%21%21%21 ! jpegdec

This string is for the Sony QX10. For other cameras, you will need to know how to connect and fetch the live stream using a standard GStreamer pipeline. I suggest experimenting with GStreamer on a PC first to discover the correct string, then using it in the Android app.
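For example, on a PC with GStreamer 1.0 installed, you can test the QX10 pipeline above by appending a display sink (a sketch; videoconvert and autovideosink are generic elements I am assuming here, and the best sink may vary by platform):

gst-launch-1.0 souphttpsrc location='http://10.0.0.1:60152/liveview.JPG?%211234%21http%2dget%3a%2a%3aimage%2fjpeg%3a%2a%21%21%21%21%21' ! jpegdec ! videoconvert ! autovideosink

If a window with the live view appears, the same string (minus gst-launch-1.0 and the display sink) should work in the app.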

For a Kodak PixPro SL10 camera, I found this string to work:

souphttpsrc location=http://172.16.0.254:9176 ! multipartdemux ! jpegdec

In the UI configuration menu (top-left toolbar), select "Split Image" to get the same image on both viewing panes. This is for a single-camera configuration.

Scenario #2: Companion PC (Raspberry Pi) with camera and Pixhawk/APM telemetry


There is a blog post currently running about using the Raspberry Pi 2/3/Zero as a companion computer, so I will not give details here on how to set up the Pi to get telemetry from the Pixhawk/APM telemetry port; please refer to those pages for instructions.

If you have a Raspberry Pi with a camera, you can use this setup:

On the Pi:

raspivid -t 0 -w 1280 -h 720 -fps 40 -b 4000000 -o - | gst-launch-1.0 -v fdsrc ! h264parse config-interval=1 ! rtph264pay ! udpsink host=$1 port=9000

The "$1" argument is the IP address of your SmartPhone.   This command is for 720p, but you should consider the resolution of the device and adjust accordingly.  It makes no sense to send a 720p video to a phone with a 800x600 display.

In the Android app, configure the GStreamer pipeline with this string:

udpsrc port=9000 ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! h264parse ! queue ! avdec_h264
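If the phone shows nothing, it can help to first verify the stream on a PC on the same network, using the same pipeline with a display sink appended (again a sketch; videoconvert and autovideosink are assumptions for your platform):

gst-launch-1.0 udpsrc port=9000 ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! h264parse ! queue ! avdec_h264 ! videoconvert ! autovideosink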

For telemetry, I suggest running MAVProxy on the Raspberry Pi and sending the UDP stream to the phone's IP address on UDP port 14550. The Raspberry Pi should be acting as a WiFi access point, and the phone configured with a static IP address.

If you want to display data on a ground station PC and in the app at the same time, you can use MAVProxy to split the UDP stream and then connect to your UAV using Mission Planner.

Here is a sample command that I use to split the data stream with MAVProxy on the Raspberry Pi:

mavproxy.py --master=/dev/ttyAMA0 --baudrate=115200 --out=192.168.1.1:14550 --out=192.168.0.3:14550

In this case, the phone is at static IP address 192.168.0.3, and Mission Planner is running on a PC at 192.168.1.1. Check your phone's documentation on how to set up a static IP for WiFi; it's usually in the 'Advanced' section of the settings. You will also need to configure your Raspberry Pi with a static IP address for its WiFi access point. The actual MAVProxy settings will depend on how you set up your Pixhawk/APM, so choose the baud rate you have configured in the flight controller.
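As a rough sketch, on a Raspbian image that uses dhcpcd, the Pi's access-point address can be pinned in /etc/dhcpcd.conf (the interface name wlan0 and the address below are assumptions for this example; older images use /etc/network/interfaces instead):

interface wlan0
static ip_address=192.168.0.1/24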

Another thing to consider is the processing power of your device. If you see a lot of pixelation, your device is too slow, and you will need to reduce the resolution of the transmission and/or the bitrate. Some smartphones are simply too slow to display the video at full resolution. I have tested 720p on my Google Nexus 6 and it works fine, but on a cheap tablet PC, I had to slow it down and switch to 800x600.

In part two of this blog, I will explain how to set up the dual video stream, along with more advanced configurations for high-power/long-range FPV using an antenna tracker and Ubiquiti Nano/Rocket M5 ground station access points.

Happy Flying!


Comments

  • Nice update to the app Patrick. On my phone, a OnePlus One, it is very difficult to edit anything in the settings. I can't scroll through the text for the pipeline, so I can't change the port number.

    Is it possible to edit this from a text file in the Android system?

    It would be great to have a button to swap the video stream from one to the other (two different UDP ports).

  • @Patrick

    This is great, thanks for sharing. Really looking forward to part 2 and getting longer range for my Sony QX.

  • Patrick, thanks a lot for this! awesome work!

  • @JB, Again, many great points. I'll be investigating the Oculus SDK. I would like to upgrade my phone also so I can play around with these more advanced devices. I see them all the time at work, as I work in the wireless testing industry, but don't get to play outside of testing the phones in chambers. 

    The Nexus 6 has no problem with a single stream, even at 720p, or with MJPEG streams that are not H264-encoded, but when I try a dual H264 stream at 720p/30fps, it pixelates pretty badly. I have not tried to benchmark where the bottleneck is, but I suspect it's just dropping frames; I don't think it's a network issue but a decoding one.

    I'll consider the MAVLink suggestion you made concerning the head tracker. Since I was using a Raspberry Pi, I chose to use it to control the gimbals, but if there is an architectural reason to use a more general approach, it may make sense.

    I have not done anything with geo-tagging yet, but there is a thread here where someone has contributed some Python scripts. It's just a matter of time and priority. What to work on first? So little time, so much to do.

  • Thanks for the feedback Patrick.

    It seems we share the same conceptual ideas!

    I see Android as particularly useful as a GUI, more than as a data processor atm. But both functions are possible of course. The biggest thing is the ever-increasing capability of mobiles coming through over time, both in the level of code available and the performance of the hardware. The mobile industry is much faster and more successful at adopting new tech and features, simply due to the magnitude of money being pumped into it by big companies! Being "mobile" means it typically fits in a UAV too... which is a bonus! ;-) So developing an app that can install and run on many (most) of these means one is not held back in development opportunities.

    I'm a bit surprised that the Nexus 6 can't handle more throughput, and am wondering if the dual video stream is the cause. Are you running two separate (i.e. 3D) video streams, or one stream split into two views for the goggles?

    Although ideally everything would run on the one ground Android device, maybe an intermediary step is required in the interim to support the level of features possible? For example, a Jetson could potentially provide the processing power to combine the different video and data streams (including MAVLink/OSD/GE etc.) so that the Android device essentially only receives a pre-processed video stream from it. This would make nearly any Android device "compatible", at the cost of adding another device (and latency, unless it's on the aircraft itself, directly connected to the cameras) to the system.

    But for an Android device solution, it might be possible and easier to integrate the video streams in the Oculus SDK rather than trying to use Android alone. Given that the newer devices (Samsung Note 4 with Samsung VR and above) are Oculus derivatives, there's the potential to use those devices in particular to perform the required tasks without having to build everything from scratch. It also means some compatibility with Oculus running on PC etc. I own the Note 4 VR setup and it runs seamlessly for the most part, and the range of apps is increasing day to day. Plus the screen res (QHD) of these devices is pretty high, provided you can drive them, which makes it a joy to use compared to some of the analog FPV systems. According to here, they should be able to handle 2048x2048@60Hz in stereo or 4K@30Hz video on the Qualcomm devices.

    The handsets are fairly pricey (but double for everyday use, which improves value for money), but the add-on VR goggles are cheap at $99 and also include extra high-refresh accelerometers/gyros. There's also a 360 camera in the works that is meant to incorporate native head tracking. Potentially one would "just" have to add a MAVLink OSD/overlay to take care of the functionality, as the 360 video performance would be handled by the manufacturer/Oculus setup. There are also some hacks to run Cardboard on the GearVR.

    BTW, with the head-tracking feedback to the Pi, it might be worth adding custom MAVLink messages over WiFi/radio to control the gimbal via the PXH PWM outputs instead. Also, out of interest, did you ever get the geotagging working on the Pi camera?

    I'm happy to contribute where I can and don't mind spending some time to get a handle on the dev. priorities and framework to make it operational.

    Regards

  • @JB, Great points. Some of your ideas I am working on, like the head-tracker integration using the smartphone's sensors. I am using a UDP port to send the phone movements to the Raspberry Pi on the UAV, which then generates the gimbal servo signals using the DMA clock channels on the Pi's output ports. I have not included this in the app yet, but will as soon as I get it all working. There is no need to invest in specialized hardware, like Fatshark goggles with all that extra gear, to get the same functionality; it's already in your phone, which everybody has now. Since Android has so much development work behind it to support many of your ideas, it's a no-brainer. Adding Google Maps is very feasible. I will be posting the source to the community shortly, so if you think you want to contribute, that would be great.

    Your point about 'slow' FPV is pretty much correct for this setup. I can get about 110ms latency at best, which is still pretty good for general-purpose multi-rotor flying, but not for racing. For low latency you should probably stick with analog.

    As far as the hardware is concerned, doing much of what you are thinking about will require better performance than most phones can deliver. My Nexus 6 can display a dual video stream with telemetry, but only at 800x600; if I increase it to 720p, it starts pixelating. Adding more things for it to do would bring it to its knees. There are a couple of vendors selling the Cardboard setup for use with tablets, and the bigger screen would be useful for the extra items you mention. Sure sounds like a fun development project.

  • Thanks Patrick!

    Looking forward to the next part! ;-)

    Provided the latency is reasonable and the range high enough over WiFi, this will be an excellent solution for "slow", surveillance-type FPV use.

    I have always been a supporter of using Android devices more with UAVs, both as a tablet GUI and as a VR goggle display. My "dream" is to have an F-22-style augmented reality display where, depending on where one tilts their head, different virtual views are shown and different augmentation/OSD information can be displayed.

    For example, with a camera on a gimbal, one can control the camera view by looking around, with the head tracking being sent to the camera gimbal (old idea, I know), but then when one looks up, one sees the flight/mission planner screen with aircraft position and data etc. Typically looking up at a blue sky is not as interesting or important; of course, one can also use a switch to bring up the planner view as well.

    In forward flight, what would be cool is to use data from Google Earth/Maps to augment the camera view. One idea would be to "sync" the camera view (in time, aircraft position, viewing angle, FOV) with the GE view and overlay them on top of each other. That way, even if the camera view is disrupted, one could continue to "fly" in a simulated GE view, as this only requires MAVLink and local GE data to work. This could also be used with angle-limited video cameras (even without a gimbal), so that one can "see" around for better situational awareness without needing a gimbal, or by switching between a forward- and a down-looking camera. The live camera would only take up one section of the GE view when it is available. The aircraft would become invisible to the user. Ultimately one would have hi-res 360 cameras mounted top and bottom to achieve seamless live footage (like the Panasonic one).

    One could then also set up GE to display POIs as required, including ADS-B aircraft etc., with the benefit being that regardless of the camera setup onboard, the overall "VR" component would function seamlessly by running on a local device. On top of that, it would be possible to add some RF visualizations as a layer using SDR, including passive radar on the same GE visuals, live 3G/4G weather radar/info, or other useful info like IR/NVIR/MSpec/etc., each triggered by looking at a particular area, or by activating that view, or by a priority filter to display only relevant info for the flight as it happens. The potential visual augmentations are limitless... but a lot of work!

    I have seen that there are a few affordable Android-based devices built on the Jetson X1 due to be released this year, including a new X1 Nvidia Shield tablet and some phones, which will likely have the performance required to run these things properly. Some are already running Linux distros as well.
