Part Two: Here is the original picture of the finished product:

[Photo: the finished Go-Discover FPV build]

This is the second part of a 2-part series on 'How to build a High-Definition FPV UAV using a Raspberry PI with an HD camera and a high-speed WiFi link.'

In my first post on the subject (located here), I discussed the parts I used, and how to install them into a Hobby King Go-Discover FPV model. 

In this post, I will discuss installing the Raspberry PI and the PI camera in the Go-Discover gimbals, and the software configuration for both the Raspberry PI and the ground station PC.

From the previous post, step 3 was completed by installing the Ubiquiti Rocket M5 in the model.  Now onto step 4:

Step 4: Install the Raspberry PI and PI Camera

Here is a photo of the position of the PI in the Go-Discover model:

[Photo: Raspberry PI mounted behind the camera gimbals in the Go-Discover]

The PI fits nicely just behind the camera gimbals, with the USB and HDMI ports on top. On the right side you can see the attached Cat5 network cable. This cable connects to the ethernet switch, which is also connected to the Rocket M5 input port.  

The two cables shown on top are the servo control wires for the gimbals, which I have connected directly to channels 4 and 5 on my radio.  I am using channel 4 (normally the rudder stick on my radio) for panning; since there is no rudder on a flying wing, this is a convenient channel to use to move the camera left and right. I have not (yet) moved to a head tracker, but if you already have that setup, just assign the channels accordingly.

To install the PI camera, remove the stock plate from the gimbals (for a GoPro), and mount the PI camera as shown in this photo:

[Photo: PI camera mounted in the gimbals' GoPro slot]

The PI camera case fits very nicely into the slot, and again I used a small piece of velcro to hold it down. You could use a couple of small screws instead if you want a more secure hold.  The two gimbal servos are also shown here. They are simple to install; just follow the Go-Discover instructions.

Here is a front view of the PI camera installed:

[Photo: front view of the installed PI camera]

Here is the block diagram describing all the connections:

[Diagram: block diagram of all the connections]

Some comments on my previous post suggested that it is possible to eliminate the ethernet switch and serial-to-ethernet converter by using a serial port on the Raspberry PI. I believe that post describes how to talk to the PI via the NavLink, but in this case I want to use the PI to bridge the connection from the ground station to the APM/PixHawk; somebody please comment if you know more about it.  I believe it would require a TCP/IP-to-serial link from the PI to the telemetry port on the APM, and some software on the PI to act as the bridge.  The main connection to the ground station is via the Rocket M5 and TCP/IP, not through a telemetry link (900 MHz or ZigBee, like I used on my other models).
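If you want to experiment with eliminating those parts, here is a minimal sketch of such a bridge using socat. This is not something I have tested on my build; it assumes the APM telemetry port is wired to the PI's UART at /dev/ttyAMA0 running at 57600 baud, and TCP port 5760 is just an example:

socat TCP-LISTEN:5760,reuseaddr,fork /dev/ttyAMA0,raw,echo=0,b57600

Mission Planner (or the virtual serial port utility described later) would then point at 192.168.1.2:5760 instead of the serial-to-ethernet converter.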

Step 5: Getting it all to work with software configuration (the really fun part starts now).

Check out this post on what others have done with streaming and the PI.  My experiments showed that using GStreamer on both the PI and on Windows gives really good results with very low latency, if you use the right parameters. 

Get GStreamer on the PI by following this blog.   This is the same version of GStreamer that I am using on my setup. 

Make sure your PI camera works by plugging the PI into a standard monitor using the HDMI port, and follow the instructions on the Raspberry PI website on how to get the camera up and running (without GStreamer).  Once you have a working PI and camera, you can then proceed to stream things over the network.  
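Before involving the network at all, a quick way to confirm the camera itself is working is to capture a still image on the PI:

raspistill -v -o test.jpg

If that writes a valid JPEG, the camera module and ribbon cable are good, and any remaining problems are in the streaming setup.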

Note: I suggest first getting the PI streaming video by plugging it directly into your local network (without the Rocket M5), where you can also connect your ground station PC and assign the correct IP addresses.  For my PI, I picked 192.168.1.2, and for the ground station, 192.168.1.1.  Make sure you can ping the PI from your PC and the PC from the PI.  
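One way to give the PI that static address, assuming a Raspbian image that still configures networking through /etc/network/interfaces (newer images use dhcpcd.conf instead), is an entry like:

auto eth0
iface eth0 inet static
    address 192.168.1.2
    netmask 255.255.255.0

Reboot (or restart networking) and then do the ping test described above.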

For streaming, you will also have to make sure all the ports you intend to use are open on the firewall (described later).
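On a Windows ground station, one way to open the UDP port is from an elevated command prompt (9000 here, matching the commands below; adjust if you pick a different port):

netsh advfirewall firewall add rule name="GStreamer UDP 9000" dir=in action=allow protocol=UDP localport=9000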

For the ground station PC, you can download GStreamer here.  When you install, make sure you select the full installation ('install everything'), not the default. 

Here is the command I use for the PI to pipe the camera output to GStreamer:

raspivid -t 0 -w 1280 -h 720 -fps 30 -b 1700000 -o - | gst-launch-1.0 -v fdsrc ! h264parse config-interval=1 ! rtph264pay ! udpsink host=192.168.1.1 port=9000

The command is explained as follows:

raspivid is the command to start the camera capture on the PI.  The -w switch sets the width in pixels, and the -h switch sets the height.  In this case I am using 1280 x 720, but you can try any combination that fits your needs. 

The -b switch sets the sampling bit rate. In this case I chose 1.7 Mbps for the stream. Again, you can experiment with higher or lower values; this setting seems to work well for me, and the latency is almost unnoticeable.  

the "-o - |" is piping the output to gstreamer.  Make sure you include the dash before the pipe "|" symbol. 

For the GStreamer command, the filters are separated with an exclamation point "!"; these are individual elements that are part of GStreamer.  Since the PI has hardware-accelerated video, the output is H.264, a highly compressed stream. The GStreamer elements are configured to transport the output over a UDP socket connection to the target PC. Notice the 'udpsink' element, which specifies the host (in this case your ground station) and the UDP port.  I am using port 9000, but you can use any open port on your system; just be sure to open the firewall or it won't work!  You can also use TCP instead of UDP, but for a stream like this I chose UDP: dropouts are certainly possible, and with UDP that is OK, while with TCP you could have socket problems and higher latency. 

Note: to get the PI to execute this command on boot, make a shell script with the above command and add it to your rc.local boot sequence (sketched below). That way, when the PI boots, you get the stream without having to log into the PI remotely. 
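For example (the file name and path are just placeholders), save the following as /home/pi/stream.sh and make it executable with chmod +x /home/pi/stream.sh:

#!/bin/sh
raspivid -t 0 -w 1280 -h 720 -fps 30 -b 1700000 -o - | gst-launch-1.0 fdsrc ! h264parse config-interval=1 ! rtph264pay ! udpsink host=192.168.1.1 port=9000

Then add the line /home/pi/stream.sh & to /etc/rc.local just above the final exit 0. The trailing "&" runs the script in the background so the rest of the boot sequence isn't blocked by the stream.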

For the ground station PC, once you have installed GStreamer and opened the correct ports, use this command (from the command prompt) to view the stream:

c:\gstreamer\1.0\x86_64\bin\gst-launch-1.0 udpsrc port=9000 ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink

If all goes well, you should see the PI camera output in a popup window on your PC screen.  For those of you who want to use FPV goggles, you can connect them to the HDMI port on your PC to display the output, if your goggles support HDMI. 

I have this command in a batch file (with a PAUSE statement at the end) to keep the window open.
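The batch file is nothing more than the command above plus the pause, along these lines:

c:\gstreamer\1.0\x86_64\bin\gst-launch-1.0 udpsrc port=9000 ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink
PAUSE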

WHEW!  If you got this far, you are amazing. 

The last step to complete the build is to connect to the APM from Mission Planner.  The method I used was to install a utility that converts a TCP connection into a virtual serial port. I suspect connecting Mission Planner directly to the TCP port would also work, but I have not tried it; I will post back after trying it.

Here is the link to set up the serial-to-ethernet device with an IP address and port.

Here is the link to the configuration utility for installing the virtual serial port.   

Once you have a serial connection over TCP/IP working to the APM, you should be able to connect with Mission Planner. On the maiden flight it worked perfectly: I didn't see a single drop in the telemetry data or anything noticeable in the video transmission; however, my first flight was limited to 2 km.

The final step is to connect the Rocket M5 to the Nano M5 and test everything using the OTA (over-the-air) connection. If all is well, you are ready to fly!  But be careful on your maiden flight; you just spent $700. 

Finally, here is a photo of my Antenna Tracker with the Nano M5 attached. My next update will include a video of a longer flight.  

[Photo: antenna tracker with the Nano M5 attached]

Happy Flying!


Comments

  • Moderator

    I have seen that too; for example, I have the same issue when I use an RPI for decoding at high resolution, but I don't have any problem on my i7 laptop... Have you investigated whether there is a way to use GStreamer with a special driver for hardware-accelerated decoding?

    Another question: I am trying to record the stream on board... I produce the .h264 file, but I cannot convert it to mp4; I get an error. 

     $ raspivid -n -w 1280 -h 720 -b 4500000 -fps 30 -vf -hf -t 0 -o - | \
         gst-launch-1.0 -v fdsrc !  h264parse ! tee name=splitter ! \
         queue ! rtph264pay config-interval=10 pt=96 ! udpsink host=192.168.1.101 port=9000  \
         splitter. ! queue ! filesink location="videofile.h264"
    That is the syntax I use... If I use only raspivid to produce the .h264, I have no problem converting it to mp4 and can view it in any mp4 player. Do you have any suggestions?

  • Moderator

    Yes, I know it's strange, because I thought the LG G3 was the latest technology available on the market, but I agree with you. Try my application and tell me what you think. As I said, I don't have the problem on the Nexus 4, but with the LG G3 I do. 

    best

    Roberto

  • @Roberto,  most likely your phone can't keep up with the stream.  On my Nexus 6 it works fine with no latency, but on my Android tablet it chokes badly because the processor is just too slow.  The Nexus 6 has a quad-core Qualcomm 'Snapdragon' processor that is fast enough to keep up with the stream.   I don't think you can 'solve' that problem without reducing the resolution of the stream.  If I switch to 800x600, then the Android tablet can keep up, but not at 720p or 1080p.  It's just that simple.  

  • Moderator

    In our app we have a similar problem with GStreamer, but only on some phones: on the Nexus 4 there is no problem, but on the LG G3 we have some issues... We are working to solve it before releasing.

  • Moderator

    Hi Patrick ,

    I have already implemented GStreamer in our VR Pad Station Pro, along with a lot of other advanced functionality... 

    This is the link to an alpha revision: https://drive.google.com/file/d/0B4DdeCEPBOEcZUtzYUQ0aGNMV3M/view?u...

    I tried your Qt app, but on my LG G3 I have some problems: there is big latency, and it sometimes loses sync with the image. If you want to test it, use UDP on port 9000 and the IP of your Android phone. 

    In our revision of the code you can view the stream, receive MAVLink telemetry, and control the drone with a joystick... connect MAVLink in UDP mode with the IP of the terminal... The only thing you cannot do is start and stop the stream; that needs our API on the drone... If you use MAVProxy, it is already compatible with our app.

  • @Roberto,   the heads-up display app is based on Qt (C++), but the Android version has a Java shell around the Qt core. I suppose there is a way to integrate these as separate activities in the same app, with the Qt component included. If you are already doing the video, I am not sure my stuff would benefit you unless you want a HUD.  Do you have a working HUD with GStreamer now? How are you integrating GStreamer into your Java environment?  I decided to use Qt since there was a library available, which I modified for Android.  I have not done anything with 3D/multiple cameras. I would be interested to see what you have implemented. 

    For the distance testing I have been doing, to get 1+ mile, I am using an antenna tracker with 16 dBi helical antennas and the 500 mW 802.11 setup.  If I have the Rocket M5 on the UAV, as with the planes, the theoretical range is 10+ miles depending on the quality of the GCS tracker and antenna.  Eventually, I will try longer distances when I can get someone to help spot the plane, so that if I lose it, I can recover it.

  • Moderator

    Hi Patrick ,

    I'm using an Alfa Network dongle on board... and I'm testing these two options:

    http://www.alfa.com.tw/products_show.php?pc=137&ps=225

    http://www.alfa.com.tw/products_show.php?pc=137&ps=208

    They work very well. I haven't tested out to 1 mile+ yet, only 400-500 meters. I too had to compile the driver and upgrade it in the RPI 2 kernel :) 

    We implemented a special version of VR Pad Station with new features. It is compatible with MAVLink, GStreamer video decoding, and joystick input for MAVLink override, and we are working on the Cardboard SDK for 3D vision :) 

    If you want to test our application, I can send you the alpha APK of VR Pad Station Pro... I think it should be compatible with your architecture; you only need to add our vrx-drone-code-control.jar to your OS. 

    My Skype account is virtualrobotix, so if you want to talk live, I'm available on Skype.

  • @Roberto,  it seems like there are some common goals here, and we shouldn't duplicate effort. If you have already started development on this API, it seems silly to create another solution.  How soon do you think it will be ready for release?  

    For the 802.11 setup, I also have this configuration on my multi-rotors.  I use the Rocket M5 on the long-range airplanes, but not the copters; it's too bulky.    On my hexacopter I am using this one:

    http://www.amazon.com/Amped-Wireless-Power-Adapter-ACA1/dp/B00DKX0N...

    But I had to compile the driver on the PI to get it to work, as the manufacturer did not support Linux; there was a generic one available online.

    For my quadcopter I am using this one:

    http://www.amazon.com/Netis-Wireless-Long-Range-Supports-Antennas/d...

    I replaced the stock dipole antennas with two skew-planar-wheel antennas for better multi-path performance, and this one works to 1 mile+.

  • Moderator

    @Cap @Patrick

    Our app uses an API to control starting and stopping the stream; I preferred that solution over other approaches. No, I haven't released the RPI 2 image to testers yet, but I'm sharing my work with a limited group of developers. I prefer to finish testing before releasing it. 

    The main difference from your solution, Patrick, is that I'm using a dongle on the RPI in 802.11ac to transmit the digital stream to the router, so the solution is light. I'm starting to test the Oculus too, but are you using a stereo camera, or a single camera and simulating 3D with the SDK's dual view on the smartphone screen? 

    I'm starting to test it on VR Pad Station .

  • @Patrick very good point, sir. I also just noticed you're about 2 hours away from me. Going to shoot you a message here shortly.
