
FPV setup with raspberry Pi


After much chasing and testing, I have found this to be an efficient way of getting low-latency, high-quality HD video out of an aircraft. The latency is around 0.4 seconds at worst, which is acceptable for FPV with an APM doing the hard work.

I will continue to search for methods to drop the latency down further, but this is a lot better than the 6-12 seconds I was getting on my first attempts.

Any comment (with useful instructions) would be appreciated.

For the wireless link, I am using two Ubiquiti Rocket M900 radios with Australian ACMA-approved firmware. At the base station I am using a tracking (yet to build the tracker...) 1.5-metre-long X- and Y-polarised Yagi, and on the plane, two RF Design flexible strip antennas placed at right angles to each other.

But how you do that bit is up to you...

The critical bit is getting the Raspberry Pis to talk to each other.

I have tried to make this as user friendly as possible... good luck.

 

Setting up 1080p IP video on the Raspberry Pi (FPV)

 

You will need two Model B Raspberry Pis and one Pi Camera (Element14 or RS Components).

Preparing your Raspberry Pi for first boot…

 

Follow the instructions at http://www.raspberrypi.org/wp-content/uploads/2012/04/quick-start-guide-v2_1.pdf

Install the prepared SD card in the Pi and boot.

Setting up your Pi

Connect the Pi to your router with a network cable.

On first start-up it will resize the FAT partition and present you with a menu.

Set your language, and keyboard layout.

Select Raspbian… then click install.

After this has extracted (it will take a while…) it will reboot into the configuration screen (again, this first boot will take a while).

The important things to change here are:

  1. Enable the camera
  2. In advanced options…
    1. Set the host name (camera for the camera end, receiver for the viewing end)
    2. Memory split: set the memory for the GPU to 256
    3. Enable SSH (this will come in handy later, as you may need to talk to the Pi in the air)

Then finish and reboot.
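If you prefer to skip the menu, the memory split can also be set by editing /boot/config.txt directly — on Raspbian of this era, this is the setting the raspi-config option writes (a config fragment; reboot for it to take effect):

```ini
# /boot/config.txt — reserve 256 MB of RAM for the GPU, the same effect
# as the raspi-config memory split option above
gpu_mem=256
```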

First login

Username: pi

Password: raspberry

Setting up the required programs for video streaming

 

Install the dependencies by running the following in a terminal:

sudo apt-get install mplayer netcat

cd /opt/vc/src/hello_pi

make -C libs/ilclient

make -C libs/vgfont

cd /opt/vc/src/hello_pi/hello_video

make

cd ~

Now repeat this for the other Pi….

 

Streaming…

First set up the receiver….

Ensure the receiver is connected to your network and run

ifconfig

After you press Enter, you will see the Pi's IP address in the output. Note this down.

Then run the following.

mkfifo buffer

nc -p 5001 -l > buffer | /opt/vc/src/hello_pi/hello_video/hello_video.bin buffer

The Pi will now wait for the feed.
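The receiver line looks odd at first glance: nc's stdout is redirected into the FIFO, so nothing actually travels through the shell pipe — the pipe only serves to start both processes at once, while the video data flows through the named pipe `buffer` that hello_video.bin reads from. A minimal, hardware-free sketch of the same pattern (file names here are illustrative, not from the original post):

```shell
# Same FIFO pattern as the receiver command: printf stands in for nc,
# cat stands in for hello_video.bin. The shell pipe just launches the
# two processes together; the data goes through the named pipe.
mkfifo /tmp/demo_buffer
printf 'hello stream' > /tmp/demo_buffer | cat /tmp/demo_buffer
rm /tmp/demo_buffer
```

Because both ends open the FIFO at the same time, the writer blocks until the reader is ready — which is exactly why the receiver must be started before the camera Pi begins sending.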

On the Camera Pi

Ensure the camera is connected to the Pi (see the instructions at http://www.raspberrypi.org/camera for how to connect it).

Ensure the Pi is connected to the network (you can confirm this with ifconfig).

 

 

In the following command, replace the IP address with the one you just noted down.

raspivid -t 0 -fps 15 -o - | nc 192.168.1.85 5001

If all goes well, you should be streaming 1080p video at 15 fps with less than 0.5 seconds of delay.
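If the link struggles, note that raspivid encodes at a fairly high H.264 bitrate by default at 1080p; it can be capped with raspivid's -b flag. A rough back-of-envelope check of what the 900 MHz link then has to carry (the 4 Mbit/s figure is an illustrative example cap, not a value from my testing):

```shell
# Back-of-envelope: data volume per minute of video at a capped bitrate.
# 4 Mbit/s is an illustrative value for raspivid's -b flag, not a
# recommendation from the original setup.
bitrate=4000000   # bits per second
seconds=60
echo $(( bitrate * seconds / 8 / 1000000 )) # megabytes per minute
```

That works out to about 30 MB per minute, comfortably within what the Rocket M900 bridge should sustain.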

Now add your wireless bridge between the two, and away you go. :)

This information has come from the Raspberry Pi Foundation website and other sources, tested and proven by myself.


Comments

  • Developer
    Spam away :) it's all heading in the same direction :)
  • Not with Gstreamer it won't - I've tried it and it's way sluggish at any decent frame size. But it'll do it in hardware quite happily, hence the EGL stuff. Still, if you can think of any more convenient means to render an overlay, I'm all ears! The EGL stuff's a pain in the butt.

    Here's a (still in-review right now) post about the software so far, a half-hearted description of what's left to do, and a link to the GitHub page: http://diydrones.com/profiles/blogs/beginnings-of-raspberry-pi-fpv-...

    It's probably worth reading that to figure out where it's at and where everything goes, but until it's reviewed by the Diydrones staff, here's the GitHub page: https://github.com/monsieurpod/raspifpv

    I'll post further stuff on that blog entry and stuff linked from it, rather than continuing to spam poor Philip's blog entry =)

  • Dear pod, could you share the link to the github page?

    Actually I believe the RPi should have enough performance to overlay some stuff on the stream — just look at XBMC, where that works.

  • Frustratingly, not yet - I've been flat out in software development land for my company and haven't had any spare brainpower to devote to it. I'm hoping this week is gonna change that, as I'm coming to the end of a big bout of stuff.

    The TX side is basically finished. I just need to complete the RX side, which involves some EGL programming to (a) add a pixel shader to the gstreamer video layer to perform the screen doubling to work with the Oculus Rift, and (b) add an overlay layer to render the OSD content. I did have it all implemented using GStreamer, but alas it's really inefficient and the RPi's not powerful enough.

    I'd be happy to pop the project as it is right now on Github if anyone wants to take a stab at it too.

  • Dear Pod, any news on the OSD front yet? I will try to set up support for this project in the German FPV community forum. This could become a game changer for FPV. Just think of preconfigured RPi images for airborne and ground station, hardware bundle recommendations and so on...

  • Definitely look forward to it. I would love to test it on my Phantom 2 and Phantom Vision! Let me know if you need some beta testing help!

  • Indeed, @dronebriz - I'll definitely be porting that. I'm currently working on the OSD component, as it turns out Gstreamer's Cairo elements simply don't work right on the Pi because they do everything on the CPU, and the Pi's too slow to cope. I'm having to delve into EGL for the rendering which is a bit of a drag.

  • @POD I'm guessing you know about the NazaDecoder library on the Arduino. I have been wanting to see it ported to the Pi!

  • Still in development, @Aytek, but going well! I plan to open source the result. It's going to use an MCP3008 ADC chip (wired to the GPIO pins) in conjunction with a voltage/current sensor, and a tap on the NAZA GPS module to pull out location & bearing information to display an arrow at the top. Code is in C using Gstreamer for video transfer and Cairo for OSD rendering.

  • @POD any update about your OSD project ?

    Cheers

    Aytek    
