Hey guys,

a long time has passed since the last update on OpenFPV.

Thanks to the community for the great feedback, and special thanks to all the people around the world who wrote me feedback mails, joined the project or donated money. I really appreciate your interest in the project.

It has been very quiet around the project lately, and the reason is very simple:

I do not want to ship crappy unusable software.

My vision for OpenFPV:

OpenFPV should be a simple, intuitive application for all desktop platforms. Easily extensible, stable, well designed. And "easily extensible" was the biggest challenge. Do you really want to mess around with C++? Do you want to build your GUI with Qt?

Imagine you could create a dedicated data channel on the TX side with Python and receive the data on the RX side with simple JavaScript. Lay out your UI with HTML5 elements. If you want to go deeper, why not edit the GStreamer pipeline yourself, without recompiling the whole application? Create your telemetry module in no time.
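None of this API exists yet, so purely as an illustration of the idea (the `openfpv` module and `DataChannel` class below are invented names, not real code):

```python
# Purely hypothetical sketch of the TX-side plugin idea above; the
# "openfpv" module and "DataChannel" class are invented names.
import time

from openfpv import DataChannel  # hypothetical API

# A dedicated, named channel that gets multiplexed over the video link.
channel = DataChannel("battery")

while True:
    # The RX side would receive this in JavaScript and render it
    # with plain HTML5 elements.
    channel.send({"voltage": 11.4, "current": 8.2})
    time.sleep(0.5)
```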

The trouble:

It was insanely difficult to get the video played back inside the WebKit browser engine without adding delay. It is not possible with HTML5 video, Flash or WebRTC; the biggest challenge is the buffer settings. Even Flash, which is the most stable technology for live streams inside a browser, cannot display the video fast enough for serious FPV flying. My OpenFPV workspace directory is full of prototypes, ideas, and more prototypes. After months of fighting with myself and my vision, I came to a dead simple idea: convert the H.264 stream to a format like MPEG-1 and decode it with JS.

Maybe you think: "What the ... ?"

Don't worry, that was my first thought too. I started to write an MPEG-1 decoder in JS, until I stumbled upon a little-known project on GitHub which does exactly this job. After integrating the GStreamer pipelines, the transcoding, the sockets and web sockets, and this library, I displayed the first output inside the browser. Without high CPU load, and with good quality. And the best thing: without any noteworthy additional lag. Sometimes the simple ideas are more difficult than the complex ones.
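For the curious, here is a rough sketch of that approach, not the actual OpenFPV code: GStreamer transcodes the incoming H.264 feed to MPEG-1 (via gst-libav's avenc_mpeg1video), and a small Python relay fans the bytes out to browser clients over a websocket, where the JS decoder paints onto a canvas. The port numbers and the `websockets` package are my assumptions:

```python
# Rough sketch of the transcoding relay described above -- not the
# actual OpenFPV code. GStreamer turns the incoming H.264/RTP feed
# into an MPEG-1 elementary stream on stdout; Python fans the bytes
# out to browser clients over a websocket, where a JS MPEG-1 decoder
# paints them onto a canvas. Needs gst-libav and `pip install websockets`.
import asyncio
import subprocess

import websockets

PIPELINE = [
    "gst-launch-1.0", "-q",
    "udpsrc", "port=5600",
    "caps=application/x-rtp,media=video,encoding-name=H264,payload=96",
    "!", "rtph264depay", "!", "avdec_h264", "!", "videoconvert",
    "!", "avenc_mpeg1video", "bitrate=2000000",  # MPEG-1 keeps the JS side simple
    "!", "fdsink", "fd=1",                       # elementary stream -> stdout
]

clients = set()

async def handler(websocket):
    # Register the browser and keep the connection open until it closes.
    clients.add(websocket)
    try:
        await websocket.wait_closed()
    finally:
        clients.discard(websocket)

async def pump():
    proc = subprocess.Popen(PIPELINE, stdout=subprocess.PIPE)
    loop = asyncio.get_running_loop()
    while True:
        chunk = await loop.run_in_executor(None, proc.stdout.read, 4096)
        if not chunk:
            break
        for ws in list(clients):  # fan out to every connected viewer
            await ws.send(chunk)

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8084):
        await pump()

asyncio.run(main())
```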

Demo:

From this Iron Bird I created the first prototype, and I decided to give you guys a little update on the project. I am excited to hear your thoughts and ideas.

- Tilman


Comments

  • @Daniel, ok thanks for that. So a better approach would be to use some other image capture rather than the one built into OpenCV. I've certainly seen at least one person do this, so it's possible. Thanks again.

  • @Randy When I ran OpenCV with the C920 on the RPi, it captured uncompressed images (YUYV or RGB, I think). OpenCV can only process images in a raw RGB format, so it would need to decode the H.264 or MPEG stream in order to run image processing. This makes the frame rate dependent on the decoding abilities of the CPU and any available H/W accelerators. Since OpenCV is used on such a wide variety of platforms, where CPUs and H/W accelerators vary, it just defaults to streaming raw RGB. It takes longer to stream the larger raw format, but it makes things simpler.

    I was unable to get the C920 to stream H.264 or MPEG because I don't think OpenCV supports those decoders, but I may be wrong about this (see the sketch below for one thing worth trying).

    Daniel
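    For anyone who wants to experiment: with a reasonably recent OpenCV build you can at least ask the V4L2 backend for a compressed capture format via the FOURCC property. Whether 'MJPG' (or 'H264') is actually honoured depends on the OpenCV build and backend, so treat this as a sketch to try, not a guarantee:

```python
# Minimal sketch: request a compressed capture format from the C920
# before grabbing frames. Whether 'MJPG' or 'H264' is honoured depends
# on the OpenCV build and the V4L2 backend, so verify what you get back.
import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"MJPG"))
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

ret, frame = cap.read()  # OpenCV still hands you a decoded BGR frame
if ret:
    print("frame shape:", frame.shape)
    print("fourcc now:", int(cap.get(cv2.CAP_PROP_FOURCC)))
cap.release()
```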

  • @Tilman - yeah, I had the same problem with the Alfa so I am trying not to use it. It will go to channel 12, which does help.

    I use MikroTik RouterBoards that have 1000 mW wifi on board.

    http://shop.duxtel.com.au/product_info.php?cPath=25&products_id...

    I have 6 dBi antennas, so the EIRP is 4 W, which is the legal limit.

    On the ground this:

    http://shop.duxtel.com.au/product_info.php?cPath=23&products_id...


  • Hey guys,
    thank you a lot for the feedback!

    @Guy McCaldin Thank you very much for the donation; every penny helps me a lot to invest in new hardware. As you can imagine, this project takes a lot of money to develop and is very time consuming. The latency on my test setup without the pipeline into the browser is around 116-140 ms. It increases over time, but I am sure that we can fix that. With the [H.264 dec -> MPEG-1 enc -> MPEG-1 dec] setup it is around 200 ms. But I am not done here. My goal is 116-120 ms inside the wrapper. The release for FPV flying is currently planned for 2015; the reason is that we still have a lot of work to do to further optimize the transmission. But for observing telemetry and/or the video feed, it should be possible this year.

    @Jerry Hatrick
    Haha, no, but you made me laugh :) Sorry for my voice in this video, it was early in the morning.

    @Marius van Rijnsoever
    Thank you! Nice idea to control over 433 MHz. I would leave the telemetry on the 2.4 GHz channel to get a tight and easy-to-integrate setup.

    @Stephen Gloor
    The problem with channel 13 is that my high-power Alfa 2.4 GHz hardware doesn't support that channel. Maybe I have to change it. What hardware do you use?

    @Tearig
    I will contact you for further discussion on this topic. Thank you for your interest.
    Another thing: I will remove the MPEG-1 enc/dec part if possible, to get the latency down to 116 ms. The problem here is that we need to decode the H.264 stream on the GStreamer side (with HW acceleration) and pipe the decoded data into a JS module, which will then draw the frames onto a canvas context (see the sketch below). To be honest, I am not an H.264 uber-expert and not a crazy low-level dev. Do you, or anybody else here, have ideas how to do this?

    @johnkowalsky
    I think it should be possible to transmit the telemetry data over 3G, and maybe a 320x240/15 fps stream for observing. The bitrate for telemetry will depend on how much data you want, but < 10 kb/s is a very rough prediction.

    best, Tilman
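    PS: To make the "remove the MPEG-1 part" idea above a bit more concrete, here is a very rough sketch of the direction, assuming a Raspberry Pi where `omxh264dec` does the hardware H.264 decoding (the element names are standard GStreamer ones; everything else is guesswork, not a finished design). The raw frames would be handed to a JS module that paints them onto a canvas with putImageData:

```python
# Very rough sketch of the "skip MPEG-1" idea discussed above, assuming
# a Raspberry Pi where omxh264dec does the hardware H.264 decoding.
# Raw RGB frames land on stdout; a JS module would paint each frame
# onto a canvas with putImageData. At 640x480 RGB and 15 fps that is
# already ~13 MB/s, so whether this beats MPEG-1 is an open question.
import subprocess

PIPELINE = [
    "gst-launch-1.0", "-q",
    "udpsrc", "port=5600",
    "caps=application/x-rtp,media=video,encoding-name=H264,payload=96",
    "!", "rtph264depay",
    "!", "h264parse",
    "!", "omxh264dec",  # hardware decode on the Pi
    "!", "videoconvert",
    "!", "video/x-raw,format=RGB,width=640,height=480",
    "!", "fdsink", "fd=1",
]

proc = subprocess.Popen(PIPELINE, stdout=subprocess.PIPE)
FRAME_BYTES = 640 * 480 * 3  # one raw RGB frame

while True:
    frame = proc.stdout.read(FRAME_BYTES)
    if len(frame) < FRAME_BYTES:
        break
    # Here the frame would be handed to the websocket layer and drawn
    # on the browser canvas; omitted to keep the sketch short.
```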

  • If the telemetry signal were embedded and the position data then sent, the environment could populate the base station with data from either a local drive or an internet connection (Google Earth). To illustrate what I was thinking, I put together a quick video of a flight made today with the two synchronized (I just did it on screen, with pause/start).

    So if the data signal drops below a certain level, the fallback could be a simulated environment. Using this to advantage on the base station could enhance remote flight capability by exploiting the resources available on the ground.

  • Ah, there's this group who had a Kickstarter to use 3G to both control the drone and see video, with further integration with the Oculus Rift. Theoretically, they said, you would be able to fly your drone anywhere you have coverage. I think it's a bad idea to integrate with systems you have no control over, which is why wifi would be my preference.

  • If you assume reasonable quality, where you're not looking at smudges on screen, and 720p, then a static scene gives you about 2 Mbps. If you move around, the bitrate can peak right up to 12 Mbps. On average, with some movement, I found that 7-8 Mbps is a nice setting, but watch out for the peaks. This assumes VBR. With CBR, people have said it "trains" your network better to deal with the throughput: it won't generate the peaks, it just degrades the quality more instead. My camera was not able to degrade quality that gracefully, and it would fail as badly at the peaks as a variable-bitrate scheme.
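    If you want to experiment with CBR in a GStreamer chain, here is a sketch of the relevant x264enc knobs (the property names are standard x264enc ones; the numbers are just examples, not recommendations):

```python
# Sketch of taming the bitrate peaks described above with GStreamer's
# x264enc. Property names are standard x264enc ones; numbers are examples.
import time

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pipeline = Gst.parse_launch(
    "videotestsrc is-live=true ! video/x-raw,width=1280,height=720 "
    "! x264enc tune=zerolatency "
    "bitrate=8000 "           # target rate in kbit/s
    "pass=cbr "               # constant bitrate instead of VBR
    "vbv-buf-capacity=500 "   # small VBV buffer (ms) clips the peaks
    "! fakesink"
)
pipeline.set_state(Gst.State.PLAYING)
time.sleep(5)                 # run briefly, then shut down
pipeline.set_state(Gst.State.NULL)
```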

  • Just out of curiosity, what kind of transmission rates are needed, in kb/s? Like many of us, I presume, I've been thinking about using 3G for this, or at least for telemetry.

  • Hi Guys

    I am also trying to make a digital video transmitter, but at lower frequencies so it will perform better in NLOS (non-line-of-sight) conditions. So far I have been using a modified video transmitter operating at 442 MHz, whose channels I can switch by PWM. The idea was to use OFDM to transmit data from an FPGA that did the compression and analog-to-digital conversion. The camera I am using so far is a 680TVL analog CCTV camera.
    I found that an FPGA would be too demanding and expensive, so I will have to try a BeagleBone Black or something else that you might recommend?
    Btw, my goal with this project is to make a digital video transmitter that takes a normal camera input and uses OFDM to achieve a long video transmission distance.

  • Hi Tilman, very nice work. Can you share the code on GitHub, so everyone can help everyone?

    Very good job, indeed

    max
