Inspired and informed by the Raspberry Pi-based FPV project started by Philip, I'm putting together some software to run a RasPi FPV rig, complete with an OSD based on data pulled from a Naza GPS module tap and an MCP3008 analog-to-digital converter for voltage and current.
It's based on the GStreamer media library and is designed to work with two Raspberry Pis: one on the airframe itself, and one as the ground station, with an Oculus Rift for the display.
The software isn't functional yet (it's still under heavy development), but I thought I'd make it available in case anyone else wants to join in, as my time's a bit limited.
Here's the GitHub project: https://github.com/monsieurpod/raspifpv
The software comes in two parts: a binary for the TX side, which runs on the airframe, and a binary for the RX side, which runs on the ground station. The TX side is split into a number of modules: a main entry point that sets up GStreamer video transmission, a telemetry gatherer, and a telemetry transmitter. The RX side has its own entry point that sets up the receive functionality, along with a telemetry receiver and a renderer to display the OSD data.
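To give a feel for what the TX entry point does, here's a minimal sketch of the kind of GStreamer setup involved. It's not the project's actual pipeline: the rpicamsrc element, caps, host and port below are placeholder assumptions.

```c
/* Minimal sketch of a TX-side video pipeline: hardware-encoded H.264 from
 * the Pi camera, packetised as RTP and pushed over UDP to the ground
 * station. Element choices, caps and addresses are placeholders. */
#include <gst/gst.h>

int main(int argc, char *argv[])
{
    GstElement *pipeline;
    GMainLoop  *loop;
    GError     *error = NULL;

    gst_init(&argc, &argv);

    pipeline = gst_parse_launch(
        "rpicamsrc bitrate=4000000 "
        "! video/x-h264,width=1280,height=720,framerate=30/1 "
        "! h264parse ! rtph264pay config-interval=1 pt=96 "
        "! udpsink host=192.168.1.10 port=5000",
        &error);
    if (!pipeline) {
        g_printerr("Failed to build pipeline: %s\n", error->message);
        g_clear_error(&error);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    loop = g_main_loop_new(NULL, FALSE);
    g_main_loop_run(loop);
    return 0;
}
```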
The TX side is almost complete, missing only integration of the Naza GPS tap module and a bit of spit and polish.
I've encountered some trouble with the RX side after discovering that GStreamer's pipeline for overlay rendering with Cairo is very inefficient, and prohibitively expensive for the Raspberry Pi's processor. Instead, I'm investigating an EGL implementation that creates the EGL display context for GStreamer, then performs the OSD rendering in hardware. In addition, I need to find a way to apply a pixel shader (or whatever works) to the GStreamer video to split it into left and right sections for display on the Oculus Rift. This work is at a very early stage; I'm using the SubtitleRenderer.h class from omxplayer for guidance, along with the eglgles GStreamer plugin (which seems to have been replaced by the new glimagesink module, which I haven't had a chance to look into yet).
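For the left/right split specifically, the rough idea is a fragment shader along these lines (GLSL ES, shown here as a C string; the varying/uniform names are placeholders, and there's no Rift lens-distortion correction in it yet):

```c
/* Sketch only: duplicate the video frame into side-by-side left and right
 * halves for the Rift. Names are placeholders; no distortion correction. */
static const char *side_by_side_frag =
    "precision mediump float;                                 \n"
    "varying vec2 v_texcoord;                                 \n"
    "uniform sampler2D tex;                                   \n"
    "void main() {                                            \n"
    "    vec2 uv = v_texcoord;                                \n"
    "    /* both halves sample the full source frame */       \n"
    "    uv.x = fract(uv.x * 2.0);                            \n"
    "    gl_FragColor = texture2D(tex, uv);                   \n"
    "}                                                        \n";
```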
I plan to get some more work done over the next few weeks, but if anyone wants to take a crack, please do – and do get in touch so we can coordinate.
Edit, Sun, May 11: Some more work done this weekend - it's now building and running with the custom EGL display context. I just need to write the rendering code in egl_telemetry_renderer.c and find a way to split the display into left and right halves in gstreamer_renderer.c.
Comments
@Juraj Kolesar Do you run the AirLive adapter on 5 GHz? And isn't there a problem with the Raspberry Pi's USB power supply for the C920? I think that camera draws about 240 mA, but the Raspberry Pi only provides something like 150 mA per USB port ... correct me if I'm wrong.
Interesting! No, I haven't looked at the C920, but that sounds like an interesting candidate for stereo video (as the RPi only has headers for a single CSI camera, sadly). Have you tried it and actually achieved sub-100 ms latency over the network? I'm kinda skeptical right now, as the RPi has its own H.264 hardware encoder too, so I don't quite understand where the difference would come from.
Do be warned that the project's not finished and doesn't actually work yet =)
Thanks for the answers. I think I am going to try your code.
Anyway, have you checked out the Logitech C920 cam? It should work with under 100 ms latency, because it has its own H.264 encoding chip inside. The RasPi then handles just the networking and isn't used as the video encoder.
For the connection I'm currently using an AirLive X.USB-3 adapter. It's dual-band, with a couple of external antennas. I haven't done a range test in the field yet, but the indoor speed is fine; ping is around 1.5 ms through this adapter. Before that I was using the original WiPi adapter, but it was pretty slow: ping over 8 ms, and video streaming wouldn't even start successfully.
I've measured 150-180 ms latency, @Juraj - that's 720p, RasPi to RasPi. I found 1080p has some serious framerate problems right now - it seems like a bug in the receive module, but I've not looked into it, as 720p is fully sufficient for the Oculus Rift anyway. I don't think sub-100 ms will be possible though!
Take a look at this link for the Naza GPS tap (it's the same link as in the article above)
Have you done any latency tests yet? If so, what are the results?
I am playing with a similar setup, but I'm using an APM 2.6 instead of DJI. With 720p on the RasPi cam and GStreamer I still had about 0.2 s of latency; at 1080p it was more than 3 s. Did you somehow manage to get it under 100 ms? I think that's critical for real FPV.
And another question: how do you read data from the Naza controller? Are you reverse-engineering it, or did you find some documentation of the DJI communication protocol? Last year I was playing with the Wookong-M, but was unable to get any support from DJI or the community around it. After my Wookong-M copter unexpectedly flew away and was lost for good, I switched to the open-source APM. But I'm curious whether DJI have somehow changed their minds about developer support...
Very interesting project; excited to see some results. I also have a similar idea of using a Raspberry Pi to stream telemetry data and a video stream from the onboard webcam, but my project is using a 4G LTE network. My suggestion would be to use something like a Pixhawk alongside the Raspberry Pi, but if you're strictly planning on using just a Pi, that's interesting too.
I'll be following this very keenly since I have a similar idea. Unfortunately my Raspberry Pi decided to stop working, so I'm currently waiting for a new one before I can start.
Very cool idea ... those little RPis sure are nifty things.
Nice idea, @Tilman! I'll definitely be keeping an eye on things!
Cheers @Jonek! Here's hoping - I could sure use the help =)
The video is sent over RTP (all using a GStreamer pipeline). Telemetry is sent over a custom protocol over UDP.
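Roughly speaking, a telemetry update is just a small fixed-layout struct fired off in a single UDP datagram - something along these lines (the field layout, port and address here are illustrative only, not the actual raspifpv wire format):

```c
/* Illustrative only: a packed telemetry frame sent as one UDP datagram.
 * The real raspifpv field layout lives in its own sources. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

typedef struct __attribute__((packed)) {
    double latitude;
    double longitude;
    float  altitude;   /* metres, from the Naza GPS tap */
    float  voltage;    /* volts, from the MCP3008       */
    float  current;    /* amps, from the MCP3008        */
} telemetry_frame_t;

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dest = {0};
    dest.sin_family = AF_INET;
    dest.sin_port   = htons(5001);                     /* placeholder port */
    inet_pton(AF_INET, "192.168.1.10", &dest.sin_addr);

    telemetry_frame_t frame = { .latitude = -33.865, .longitude = 151.209,
                                .altitude = 120.0f, .voltage = 11.1f,
                                .current  = 8.5f };
    sendto(sock, &frame, sizeof(frame), 0,
           (struct sockaddr *)&dest, sizeof(dest));
    return 0;
}
```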
The video system gets about 150-180 ms of latency right now (measured by pointing the camera at a screen with a timer in between them, so you can see the readout on both the timer and the screen, then taking a photo). That's with S-Video output to an old TV, as I've found myself short of external displays. It's more than double the latency I measured from an analog setup (about 80 ms), but I've not managed any better yet. It feels okay, though.
Graceful degradation is unsolved right now - I've been aiming to get something working first. I was planning, however, on using the "tee" splitter in the GStreamer pipeline to split off and encode a lower-bitrate version to send in tandem. It'll take a bit of custom GStreamer module coding on the RX end, though, to detect the degradation and switch feeds - probably more than I'm willing to do in the short term.
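The shape of pipeline I have in mind is roughly this (a sketch only: element names, bitrates and ports are placeholders, and whether the Pi can actually sustain the second encode is exactly the open question):

```c
/* Sketch of the tee idea: raw frames split into a full-quality branch and
 * a low-bitrate fallback, each sent to its own UDP port. */
static const char *dual_rate_pipeline =
    "v4l2src ! video/x-raw,width=1280,height=720,framerate=30/1 ! tee name=t "
    "t. ! queue ! omxh264enc target-bitrate=4000000 ! h264parse "
    "   ! rtph264pay pt=96 ! udpsink host=192.168.1.10 port=5000 "
    "t. ! queue ! videoscale ! video/x-raw,width=640,height=360 "
    "   ! omxh264enc target-bitrate=500000 ! h264parse "
    "   ! rtph264pay pt=97 ! udpsink host=192.168.1.10 port=5002";
```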
Range is totally untested so far! I hear good things about these HD digital links though, plus with a 900 MHz radio it should have decent penetration. Time will tell.
That's a nice approach to two of the topics I've dreamed about most in FPV: Oculus Rift support and HD video! Thanks for sharing your project. I hope you get lots of support.
What are you planning as the communication mechanism between the flying Pi and the Pi in the ground station?
Did you measure the latency in your current setup (event in front of the camera -> display of the event in the Rift)?
From my own experience, the most critical aspects in a setup like yours are 1. latency, 2. graceful degradation, and 3. range.
Hi, your project looks awesome; you've put a lot of energy into it. I know you've created your own open-source project, but I want to invite you to OpenFPV, an initiative I started a few days ago to get a lot of people together to get the best results out of the Raspberry Pi (or any other single-board computer) for IP-based FPV transmission. Please let me know if you want to join... contact@openfpv.org