3+km HD FPV system using commodity hardware

Hi

Over the last couple of months I have been working on a project that might be of interest to you: https://befinitiv.wordpress.com/wifibroadcast-analog-like-transmission-of-live-video-data/

Basically it is a digital transmission of video data that mimics the (advantageous) properties of an analog link. Although I use cheap WIFI dongles, this is not one of the many "I took a Raspberry and transmitted my video over WIFI" projects.

The difference is that I use the cards in injection mode. This allows sending and receiving arbitrary WIFI packets (a minimal injection sketch follows the list below). What advantages does this give?

- No association: A receiver always receives data as long as it is in range

- Unidirectional data flow: Normal WIFI uses acknowledgement frames and thus requires a two-way communication channel. My project makes an asymmetrical link possible (-> different antenna types for RX and TX)

- Error tolerant: Normal WIFI throws away erroneous frames even though they could have contained usable data. My project uses every bit of data it receives.
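
To make the injection mode concrete, below is a minimal sketch of raw packet transmission on Linux. This is not wifibroadcast's actual code: the interface name, addresses, and frame fields are illustrative assumptions, and the card must already be in monitor mode (e.g. via "iw dev wlan0 set type monitor"):

```python
# Minimal raw-injection sketch (requires root). All values are placeholders.
import socket
import struct

IFACE = "wlan0"  # assumption: a WIFI interface already in monitor mode

# Minimal radiotap header: version 0, pad 0, length 8, no optional fields.
RADIOTAP = struct.pack("<BBHI", 0, 0, 8, 0)

# Bare-bones 802.11 data frame header (24 bytes). The addresses are
# arbitrary since there is no association; the receiver just filters on them.
DOT11 = struct.pack(
    "<HH6s6s6sH",
    0x0008,                        # frame control: version 0, type = data
    0,                             # duration
    b"\x01\x02\x03\x04\x05\x06",   # addr1
    b"\x01\x02\x03\x04\x05\x06",   # addr2
    b"\x01\x02\x03\x04\x05\x06",   # addr3
    0,                             # sequence control
)

def inject(payload: bytes) -> None:
    """Fire-and-forget transmission: no association, no acknowledgements."""
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
    s.bind((IFACE, 0))
    s.send(RADIOTAP + DOT11 + payload)
    s.close()

inject(b"one chunk of video data")
```

A receiver does the reverse: it opens the same kind of raw socket on a monitor-mode interface and picks up every matching frame it can hear, intact or not.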

For FPV usage this means:

- No stalling image feeds as with the other WIFI FPV projects

- No risk of disassociation (which would mean flying blind)

- Graceful degradation of the camera image instead of stalling (or worse: disassociation) as you get out of range

The project is still beta but already usable. On the TX and RX side you can use any Linux machine you like. I use Raspberry Pis on both sides, which works just fine. I also ported the whole stack to Android. If I have bystanders I just give them my tablet so they can join the FPV fun :)

Using this system I was able to achieve a range of 3km without any antenna tracking. At that distance there was still enough link margin for a few more km, but my line of sight was limited to 3km...

In the end, what does it cost? Not much. You just need:

2x Raspberry A+

2x 8€ wifi dongles

1x Raspberry camera

1x Some kind of cheap display

Happy to hear your thoughts/rebuild reports :)

See you,

befinitiv.


Replies

  • Developer

    I did some bench tests to see how some of the parameters affected latency.  I'm using TommyLarsen's images, but I think what they do is consistent with befinitiv's startup scripts.

    Below is a table showing the "average lag" (of 3 ~ 5 samples, see far right column) and the parameters changed (changes from previous row shown in green).

    [Table image: average lag per test configuration, with changed parameters highlighted]

    To summarize:

    • my setup's latency (2xRPi, 2.4GHz wifi, 1xAlfa on tx, 1xTPLink on rx) is pretty much always 170ms, with occasional (good) drops to 115ms.
    • latency didn't change when using the Alfa vs the TPLink on the rx side (I didn't try changing dongles on the tx side)
    • reducing the resolution does not improve latency (this surprised me)
    • increasing the fps does improve latency.  At 30fps I'd often see 0.230 sec lag; at 48fps or 60fps, 0.172 sec was common. There were a few tests (see peach squares in the table) where the lag was over 0.300 sec at 30fps.  It was very repeatable but I don't understand why.
    • the biggest improvement came from increasing the rate of key frames (i.e. reducing the -g value).  This caused the lowest latency seen (115ms) to occur more often (see the raspivid sketch after this list).
    • it's not clear if "retrans" affected latency
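
    Since the fps and key-frame interval had the biggest effect, here is a sketch of how those parameters map onto raspivid flags when launching the camera source from Python. The values are illustrative assumptions, not the exact test configuration:

```python
# Sketch: launch raspivid with the latency-relevant flags and read the
# H.264 byte stream from its stdout pipe. Values are illustrative.
import subprocess

cmd = [
    "raspivid",
    "-t", "0",           # no timeout: run forever
    "-w", "1280", "-h", "720",
    "-fps", "48",        # higher fps lowered latency in the tests above
    "-g", "12",          # key frame every 12 frames (smaller -g = more often)
    "-b", "4000000",     # 4 Mbit/s target bitrate
    "-o", "-",           # write the stream to stdout
]

cam = subprocess.Popen(cmd, stdout=subprocess.PIPE)
while True:
    chunk = cam.stdout.read(4096)  # feed these chunks to the tx program
    if not chunk:
        break
```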
    • Thanks for the results! How exactly did you measure the latency? Using the stopwatch+screenshot approach?

      Currently I'm diving into the main sources of the latency. So far I've found that there are most likely no frames "stuck" in the Raspi encoder, in the sense of the encoder having a fixed latency of N frames until the first result appears (which is already a good sign). Next step is to timestamp the NALUs (containing an image with a readable timestamp) upon reception to get the actual coding+transfer latency. Since this excludes the decode latency, it should help to decide whether it makes more sense to optimize the encode or the decode side.

      • It would be really great if you can get to grips with where the latency actually sits in the system.  Traditional digital wifi approaches to FPV typically use gstreamer-to-gstreamer over an associated link, and quite often latency builds up, I think usually on the tx side within gstreamer, if the available sending bandwidth drops or if there are delays due to packet loss or link disassociation.  Sometimes the link can start off with almost no perceptible latency but over time can lag by a second or more.  Since using this wifibroadcast technique I haven't seen this at all, I presume because there is no gstreamer on the tx side.  I also used to get stream stalls using rpicamsrc, but didn't get them using raspivid through a pipe.  Raspivid seems to be a good source, and they've added lots of tweaking options that can be very useful.

        At the rx side, if using hardware decoding, there are few bottlenecks to back up, and there are ways of reducing the pockets of potential latency (reducing ring buffers, TCP buffers, disabling sync, etc.).  I believe that with enough trial, testing and tuning (like Randy's great work testing different configurations), we can come up with sets of settings that reduce latency to very good levels.  Being able to narrow down where the latency sits in the system would be a great help in targeting efforts.
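
        As an illustration of the rx-side tuning mentioned above, here is a hypothetical decode invocation launched from Python with clock sync disabled. It assumes a GStreamer 1.x install and some receiver program ("./rx", a placeholder) writing the raw H.264 stream to stdout; it is not the actual setup used in this thread:

```python
# Sketch: pipe a received H.264 stream into a GStreamer decode/display
# chain with sync disabled, so frames show as soon as they are decoded.
import subprocess

rx = subprocess.Popen(["./rx", "wlan0"], stdout=subprocess.PIPE)  # placeholder
gst = subprocess.Popen(
    [
        "gst-launch-1.0",
        "fdsrc",             # read the stream from stdin
        "!", "h264parse",
        "!", "avdec_h264",   # software decode; a hardware decoder would go here
        "!", "autovideosink", "sync=false",  # don't wait for the pipeline clock
    ],
    stdin=rx.stdout,
)
gst.wait()
```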

      • Developer

        That detailed investigation sounds really good.

        • Ok, so I have some initial results. The method of obtaining them was:

          - Let a timestamp (aka stopwatch) run on the PC screen

          - Capture the PC screen with the Raspi camera and transfer the data

          - On the receiver (PC) parse the received data stream for (complete) NALUs and timestamp the reception

          - Convert the received NALUs to images and calculate the difference of the timestamp visible in the image and the timestamp of the NALU

          This method should give a rather precise number of the time needed from capture to image reception. It should cover compression, transmission and reception (thus excluding decompression and display).
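
          The reception-side step might look like the following minimal sketch (illustrative, not the actual measurement code; it assumes 4-byte start codes, although H.264 streams may also use 3-byte ones):

```python
# Sketch: scan an incoming H.264 byte stream for NALU start codes and
# record an arrival timestamp for each complete NALU.
import time

START_CODE = b"\x00\x00\x00\x01"

def timestamp_nalus(chunks):
    """Yield (arrival_time, nalu_bytes) for each complete NALU found in
    `chunks`, an iterable of received data blocks."""
    buf = b""
    for chunk in chunks:
        buf += chunk
        while True:
            first = buf.find(START_CODE)
            nxt = buf.find(START_CODE, first + len(START_CODE))
            if first < 0 or nxt < 0:
                break  # wait for more data to complete the NALU
            yield time.monotonic(), buf[first:nxt]
            buf = buf[nxt:]
```

          Comparing each arrival time against the stopwatch value visible in the decoded image then gives the capture-to-reception latency.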

          The results were:

          @2FPS: 620ms

          @30FPS: 190ms

          @48FPS: 110ms

          The 2FPS number seems to indicate that there is indeed a frame "stuck" in the encoder or the transmission chain: at 2FPS the frame period is 500ms, so 620ms minus 500ms leaves an encoding+transfer delay of around 120ms. Actually it looks very much like the frame is being stuck in the encoder. The numbers above were taken with a retransmission block size of 8 (meaning that there are up to 8 packets "stuck" in the transmission). I also tried a block size of one, where each packet immediately arrives at the receiver. This still indicated that the frame is stuck in the encoder.

          What next? Mmmh, I guess I'll take some time to think about the results and the next steps.
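
          For intuition about the block-size effect mentioned above, here is a toy model (not wifibroadcast's actual code) of how a retransmission block of size N can hold up to N packets back:

```python
# Toy model of block-based redundant transmission: packets are buffered
# until a block fills, then the whole block is sent `copies` times. With
# block_size=8, the first packet of a block waits for 7 more before it
# leaves, i.e. up to 8 packets are "stuck" in the transmission.
def send_in_blocks(packets, send, block_size=8, copies=2):
    block = []
    for pkt in packets:
        block.append(pkt)
        if len(block) == block_size:
            for _ in range(copies):  # blind redundancy instead of ACKs
                for p in block:
                    send(p)
            block.clear()
```

          With block_size=1 every packet leaves immediately, which matches the test above that still showed the frame being held up earlier, in the encoder.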

          • That looks to me like something is buffering the frames somewhere and won't release the data until the buffer is full. Maybe we should find out where this buffer is; if we could determine its size, we could work around it, e.g. match the frame size to the buffer size, or add extra bytes to fill the buffer up. Those extra bytes could be OSD information...

            I think this buffer is on the WiFi side (network). It's still using TCP/IP to send the data, right (even though it's broadcast)? Maybe we can change the protocol in the TX and RX to work with UDP, and send the data as soon as the frame is received...

            • No, it's not using TCP. wifibroadcast spits out raw packets. There is no protocol.

              110 ms latency is already pretty good.

              • Good if it's at least 720p with decent bitrate.
          • Hi befinitiv, the 620ms figure might make sense if the sensor/encoder only 'releases' an image at the tail end of each scheduled frame.  In the case of 2fps an image is only released every 500ms, and adding the encoding/transfer delay makes 620ms consistent with the other figures.  It may be that the Pi firmware takes the extra time to build up the image over the 500ms based on some kind of sampling, or else it just takes the last sensor data at the 500ms mark and releases that.  Conjecture of course, but it would explain the differing latencies at different frame rates, rather than something getting stuck.  Also, have you tried this with every frame as an i-frame, to rule out any potential lag while either side waits for non-differential frames?

            • I haven't looked at the code yet, but would it be possible to make the hardware spit out frames at 48/60fps and discard the ones we don't need, to save on bandwidth?  Of course it would be best to prevent the encoder from doing this altogether, and it might then give even less than 110ms latency.

