Part Two: Here is the original picture of the finished product:


This is the second part of a two-part series on how to build a high-definition FPV UAV using a Raspberry PI with an HD camera over a high-speed WiFi link.

In my first post on the subject (located here), I discussed the parts I used, and how to install them into a Hobby King Go-Discover FPV model. 

In this post, I will discuss installing the Raspberry PI and the PI camera in the Go-Discover gimbals, and the software configuration for both the Raspberry PI and the ground station PC.

From the previous post, step 3 was completed by installing the Ubiquity Rocket M5 in the model.  Now onto step 4:

Step 4: Install the Raspberry PI and PI Camera

Here is a photo of the position of the PI in the Go-Discover model:


The PI fits nicely just behind the camera gimbals, with the USB and HDMI ports on top. On the right side you can see the Cat5 network cable attached. This cable connects to the ethernet switch, which is also connected to the Rocket M5 input port.

The two cables shown on top are the servo control wires for the gimbals, which I have connected directly to channels 4 and 5 on my radio. I am using channel 4 (normally the rudder stick on my radio) for pan. Since there is no rudder on a flying wing, this is a convenient channel to use to move the camera left and right. I have not (yet) moved to a head tracker, but if you already have that setup, just assign the channels accordingly.

To install the PI camera, remove the stock plate from the gimbals (for a GoPro), and mount the PI camera as shown in this photo:


The PI camera case fits very nicely into the slot, and again I used a small piece of velcro to hold it down. You could use a couple of small screws instead if you want a more secure hold. The two gimbal servos are also shown here. They are simple to install; just follow the Go-Discover instructions.

Here is a front view of the PI camera installed:


Here is the block diagram describing all the connections:


Some comments on my previous post suggested that it is possible to eliminate the ethernet switch and serial-to-ethernet converter by using a serial port on the Raspberry PI itself. I believe the post in question describes how to talk to the PI via MavLink, but in this case I want to use the PI to bridge the connection from the ground station to the APM/PixHawk. Somebody please comment on this if you know more about it. I believe it would require a TCP/IP-to-serial link from the PI to the telemetry port on the APM, and some software on the PI to act as the bridge. The main connection to the ground station is via the Rocket M5 and TCP/IP, not through a telemetry link (900 MHz or Zigbee, like I used on my other models).
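For anyone who wants to experiment, here is a rough sketch of how such a bridge might look using socat on the PI. This is untested on my setup; the serial device, baud rate, and TCP port below are all assumptions you would need to match to your own wiring:

```shell
# Untested sketch of a TCP-to-serial MavLink bridge on the PI.
SERIAL_DEV=/dev/ttyAMA0   # assumed PI UART wired to the APM telemetry port
BAUD=57600                # typical APM telemetry baud rate (an assumption)
TCP_PORT=5760             # arbitrary listening port for the ground station

# socat would forward the TCP connection to the serial port:
BRIDGE_CMD="socat TCP-LISTEN:${TCP_PORT},reuseaddr,fork ${SERIAL_DEV},raw,b${BAUD}"
echo "$BRIDGE_CMD"        # run this on the PI (socat must be installed)
```

The ground station would then point its TCP connection at the PI's address on that port instead of at the serial-to-ethernet converter.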

Step 5: Getting it all to work with software configuration (the really fun part starts now).

Check out this post on what others have done with streaming and the PI.  My experiments showed that using GStreamer on both the PI and on Windows gives really good results with very low latency, if you use the right parameters. 

Get GStreamer on the PI by following this blog.   This is the same version of GStreamer that I am using on my setup. 

Make sure your PI camera works ok by plugging in the PI to a standard monitor using the HDMI port and follow the instructions on the Raspberry PI website on how to get the camera up and running (without GStreamer).  Once you have a working PI and camera, you can then proceed to stream things over the network.  

Note: I suggest you first get the PI streaming video by plugging it directly into your local network, where you can also connect your ground station PC (without the Rocket M5). Assign the PI and the ground station static IP addresses on the same subnet. Make sure you can ping the PI from your PC and the PC from the PI.
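As a sketch of that bench check, with hypothetical addresses (substitute your own subnet):

```shell
# Hypothetical static addresses for the bench test; substitute your own.
PI_IP=192.168.1.10     # assumed address for the PI
GCS_IP=192.168.1.20    # assumed address for the ground station PC

# Both must be on the same subnet for a direct bench connection:
[ "${PI_IP%.*}" = "${GCS_IP%.*}" ] && echo "same subnet"

# Then verify reachability in both directions:
# ping -c 3 "$PI_IP"    (run from the ground station PC)
# ping -c 3 "$GCS_IP"   (run from the PI)
```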

For streaming, you will also have to make sure all the ports you intend to use are open on the firewall (described later).
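On a Windows ground station, the port can be opened from an administrator command prompt. This is a sketch only; the rule name is arbitrary, and you would change the port number if you picked a different one:

```shell
# Builds the Windows firewall rule for the UDP stream port (run the
# printed command from an administrator prompt on the ground station PC).
UDP_PORT=9000
FW_RULE="netsh advfirewall firewall add rule name=\"GStreamer UDP\" dir=in action=allow protocol=UDP localport=${UDP_PORT}"
echo "$FW_RULE"
```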

For the ground station PC, you can download GStreamer here. When you install, be sure to select the full installation that includes everything (not the default).

Here is the command I use for the PI to pipe the camera output to GStreamer:

raspivid -t 0 -w 1280 -h 720 -fps 30 -b 1700000 -o - | gst-launch-1.0 -v fdsrc ! h264parse config-interval=1 ! rtph264pay ! udpsink host=<ground station IP> port=9000

The command is explained as follows:

raspivid is the command to start the camera capture on the PI. The -w switch is for the width in pixels, and the -h switch is for the height. In this case, I am using 1280 x 720, but you can try any combination that fits your needs.

The -b switch is the bit rate for the sampling. In this case I chose 1.7 Mbps to send over the stream. Again, you can experiment with higher or lower values. This setting works well for me, and the latency is almost unnoticeable.
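As a sanity check on the bitrate choice, it helps to compare it against what the link can carry. The usable-throughput figure below is an assumption, not a measurement; real Rocket M5 throughput depends on distance and antennas:

```shell
# Rough headroom check: video bitrate vs. assumed usable link throughput.
VIDEO_KBPS=1700     # 1.7 Mbps, the value passed to the raspivid -b switch
LINK_KBPS=65000     # assumed usable Rocket M5 throughput; measure your own
HEADROOM=$(( LINK_KBPS / VIDEO_KBPS ))
echo "headroom: ${HEADROOM}x"   # leaves plenty of room for telemetry
```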

The "-o - |" pipes the output to GStreamer. Make sure you include the dash before the pipe "|" symbol.

For the GStreamer command, the filters are separated with an exclamation point "!", as these are individual elements that are part of GStreamer. Since the PI has hardware-accelerated video, the output is in a format called H264, which is a highly-compressed stream. The GStreamer elements are configured to transport the output via a UDP socket connection to the target PC. Notice the 'udpsink' element, which specifies the host (in this case your ground station) and the UDP port. I am using port 9000, but you can use any open port on your system; just be sure to open it in the firewall or it won't work! You can also use TCP instead of UDP, but for a video stream I chose UDP: dropouts are certainly possible, and with UDP a lost packet is simply skipped, while with TCP you could have socket problems and higher latency.

Note: to get the PI to execute this command on boot, make a shell script with the above command and add it to your rc.local boot sequence. That way, when the PI boots, you get the stream without having to log into the PI remotely.
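A sketch of that boot setup follows. The ground-station address in the script is hypothetical, and note that rc.local must still reach its final "exit 0" or the PI can hang on boot, so the script is launched in the background:

```shell
# Write the streaming command into a script (then copy it onto the PI).
cat > stream.sh <<'EOF'
#!/bin/sh
# 192.168.1.20 is a hypothetical ground-station address; use your own.
raspivid -t 0 -w 1280 -h 720 -fps 30 -b 1700000 -o - | \
  gst-launch-1.0 -v fdsrc ! h264parse config-interval=1 ! rtph264pay \
  ! udpsink host=192.168.1.20 port=9000
EOF
chmod +x stream.sh

# In /etc/rc.local, before the final "exit 0", add a line such as:
#   /home/pi/stream.sh &
```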

For the ground station PC, once you have installed GStreamer and opened the correct ports, use this command (from the command prompt) to view the stream:

c:\gstreamer\1.0\x86_64\bin\gst-launch-1.0 udpsrc port=9000 ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink

If all goes well, you should see the PI camera output on your PC screen in a popup window. For those of you who want to use FPV goggles, you can connect them to the HDMI port on your PC to display the output, if your goggles support HDMI.

I have this command in a batch file (with a PAUSE statement at the end) to keep the window open.

WHEW!  If you got this far, you are amazing. 

The last step to complete the build is to connect to the APM from Mission Planner. The method I used was to install a utility that converts a TCP connection to a virtual serial port. I suspect connecting Mission Planner directly to the TCP port would also work, but I have not tried it; I will post back after trying it.

Here is the link to setup the serial to ethernet device to have an IP address and port.

Here is the link to the configuration utility for installing the virtual serial port.   

Once you have a serial connection over TCP/IP working to the APM, you should be able to connect with Mission Planner. On the maiden flight, it worked perfectly, and I didn't see a single drop in the telemetry data or anything noticeable in the video transmission, however my first flight was limited to 2km.

The last step is to connect the Rocket M5 to the Nano M5 and test everything using the OTA (over-the-air) connection. If all is well, you are ready to fly! But be careful on your maiden flight; you just spent $700.

Finally, here is a photo of my Antenna Tracker with the Nano M5 attached. My next update will include a video of a longer flight.  


Happy Flying!



  • @Patrick

    Ah, that's right, I forgot raspivid spits out H264. So, you don't need to worry about it. But, if you were using a regular webcam (say, with YUV) you could use the hardware 1080p encoder/decoder (OpenMAX) combined with omxh264enc to do all of the H264 encoding in hardware. I am testing that now with a USB camera, but I can't imagine it will be any faster.

    Anyway, I scrapped the idea of using the raspicam for a number of reasons, the two most important being the limited mounting options (short cable) and the lack of scalability (can only run one). I can run multiple cameras via USB (one vis, one FLIR), so this way the pipelines look essentially similar, and I can always bolt on more USB cams and know how to deal with them.

    Regarding LTE, the coverage in my area is very good, so I regularly see 20-30ms round trip times between LTE endpoints (I use one at the GCS too). I think wifi will be even better, just need to spend some time with it.

  • @Dan, this is the pipeline string I use on the PI:

    raspivid -t 0 -w 1280 -h 720 -fps 30 -b 2000000 -o - | gst-launch-1.0 -v fdsrc ! h264parse config-interval=1 ! rtph264pay ! udpsink host=$1 port=9000

    I am not using the omxh264 encoder. I think the GPU is producing the H264 stream in the PI, not the CPU. How would the omxh264 encoder work with the PI? 

  • @Dan, I am using 720p, but the PI can do 1080p with reasonable latency. When I did my tests, they seemed about 10 to 20ms longer. I don't use full 1080p because of my FPV goggles. I am using Sony HMZ-T3W HD goggles, which only go up to 720p, so no need to send 1080p when the display only does 720p.

    I set my PI to max out the CPU/GPU clock speed, and set the bit rate to 2Mbps and this gives me about 110ms latency. It's 130 ms with full hd.  What kind of latency do you get using the LTE modem? Seems like the network would add quite a bit.

  • I have used the MikroTik Routerboard at 2.4GHz and got good video from a C920 out to 4km in practice. Unfortunately, at the OBC the ethernet port came off the board as we were about to launch, so I did not get to see how it went over the course.

  • If you're looking for a more powerful WiFi link, I would suggest taking a look at MikroTik; they also make a 5GHz (1300mW) and a 900MHz (500mW) version.

  • Cool - I started using the pi camera, but wasn't too impressed with the quality. I don't think the camera itself is outputting H264, rather relying on the TI openmax chipset to do it. You can also use that with any USB cam to do the heavy lifting, using omxh264enc in your pipeline. Obviously, the Pi is not going to be doing any 1080p encoding in CPU very well. I really need to do some more experimenting to see whether a regular USB camera coupled with the openmax encoder is any faster than the C920's hardware encoder - i'm just not sure yet. It was my hope that the C920 would do the hard work, and it does (and you can configure the hardware encoder's built in parameters - bitrate, quality, etc pretty easily), but lower latency would be nice.

    I am using a USB LTE modem - so far it works very well on the ground, adding only about 20-30ms of latency. However, in the air things get wacked out - but I think that's just ESC/motor interference - I have yet to test with an external LTE antenna, which I think will help quite a bit. I have a RocketM5 and a nanobeam sitting here on my desk with the intention of making the nanobeam into an antenna tracker, but tridge's feedback from the OBC kind of put me off to using 5.8 for long range. I intend to pick up some 900 gear to try, but I just wish they made something like the nanobeam in the 900 variety. 

  • @Dan, the camera I am using is the Raspberry PI stock camera. It's also an H264 stream and thus far I have not broken the 100ms barrier. I have a new laptop which is much faster than the one I was using and I will measure again to see if I get an improvement, but so far I have not heard of anybody getting sub 100ms latency performance.

    For the rocket, I am using the Nano M5/Rocket M5 combo which is 5.8ghz. The antennas I am using are dual 'Skew Planar Wheel' right hand and left hand circular polarization. This setup seems to work pretty good, but I have not gone beyond 3km yet.  I have another setup I will be trying soon which includes a second Rocket M5 on the ground with two helical antennas instead of the patch antenna built into the Nano. This should boost the signal from 3 to 5 db, but I have not tested it yet. I have not done any experiments with 900mhz or LTE. How are you using LTE, do you have a module or are you using a phone or ??


  • Patrick, what camera are you using, and were you ever able to measure latency? I have been using a C920, which can do the H264 encoding in-camera. However, the onboard encoder does seem to have some latency. I can reliably get about 120ms total (tested using the "point the camera at the monitor" method), but something less than 100ms would be optimal.

    Have you considered using a 900MHz rocket? That was my plan, but so far I have been having fairly good results with LTE onboard. I may still try the 900MHz and use the LTE as a backup.

  • FYI: If anybody is interested in using GStreamer with Mission Planner for a Head-Up display, I have posted a blog about how to set it up here:

    If you decide to try it out, please let me know if you have issues. This is a 'beta'.

  • Thought I had the script fixed the other day. Turned out my way of passing UDP to serial was a bit naive.

    Anyway, I got things sorted out and now it seems to be working really nicely. There is only one version now, which runs two processes: one for UDP to serial and one for serial to UDP. Total CPU consumed for both processes seems to be less than 15%. Latency seems to be about the same as when connected with USB, maybe a little bit worse; this is with the odroid and laptop connected to my home wifi network. For the odroid I'm using a D-Link USB wifi dongle.

    This is to get MavLink data from my Pixhawk to my GCS via a Raspberry PI - vizual54/MavLinkSerialToUDP