Does anyone know of a relatively ready-to-use solution for sending serial data over an existing SD video stream? Maybe during the vblank section of each frame?
I'm hoping there might be an existing solution that would let me stream in serial data, encode it into the video, which would be transmitted and received, and then decoded out of the video, and sent to a microcontroller as serial data.
Or maybe I'll just have to bite the bullet and add another radio system on yet another frequency...
Thanks for any ideas!
I'm looking for a way to send serial data from a UAVDevBoard to some kind of video data modem, which would embed the data into the video stream in a way that's invisible to humans viewing the video (maybe by hiding the data in the vblank section of each frame). Then on the ground side, have another video data modem that scrapes the serial data back out of the video, and sends it out a UART / serial port that I could read using something like an ArduStation to do antenna tracking.
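Whatever hardware ends up doing this, the airborne "modem" will need some framing so the ground side can find byte boundaries in whatever it scrapes back out of the video. Here's a minimal sketch in C of the kind of thing I mean; the start byte, length field, and XOR checksum are my own invention for illustration, not any existing product's format:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical minimal framing for telemetry bytes carried over video:
   a start byte, a length, the payload, and an XOR checksum so the
   ground-side "modem" can resync after a corrupted frame. Just a
   sketch of the idea, not a real protocol. */
#define FRAME_START 0x7E

size_t frame_build(const uint8_t *payload, uint8_t len, uint8_t *out)
{
    uint8_t sum = 0;
    out[0] = FRAME_START;
    out[1] = len;
    for (uint8_t i = 0; i < len; i++) {
        out[2 + i] = payload[i];
        sum ^= payload[i];             /* running XOR over the payload */
    }
    out[2 + len] = sum;
    return (size_t)len + 3;            /* start + length + payload + checksum */
}
```

The ground unit would scan for `FRAME_START`, read the length, and verify the checksum before handing the payload to the UART.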
The closest I've found is this pair of devices:
They do most of what I want, except that:
- they take over the audio channel, which I'd like to preserve as an actual audio channel
- they include their own GPS, which I don't need, and send their own data format instead of letting me send my own data.
I haven't found much yet; the best options so far are the EZ products I linked to above. Another promising one is the EzUHF, a long-range RC system that allows two-way communication, but custom telemetry isn't supported yet, only their own products' telemetry. The product designer/engineer sounded willing to support it in the future. Just not yet...
EagleTree uses a technique based on the closed-caption TV system for sending data over a video stream. It works very well, continuing to function even when the video link is quite impaired. It's part of the OSD Pro system, which sends data to the EagleEyes ground unit; the ground unit then separates the data back out for use with Google Earth, local storage, etc.
The audio method, by contrast, is limited to about 4800 baud.
Taken from my post in another forum...
If you do a little googling for EIA-608 and EIA-708 you'll discover the standard protocol for injecting "closed captioning" into a standard video stream (e.g., NTSC). The fact that it's already a widely-used standard is important, because that means custom ICs already exist for doing this. Basically, you have roughly 960 bits per second of bandwidth for injecting data into the vertical blanking interval on line 21. Or, you could take the H and V outputs of the MAX7456 OSD chip and use the non-display lines (counted via the H sync) for this same purpose. You'll see that the Rembizi OSD already does this: it uses the vertical blanking interval to embed telemetry data. This way, you can still use your audio channel to monitor your engine for strange noises and anomalies which might indicate it's time to head back!
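For a sense of what that injection looks like at the byte level: EIA-608 carries two bytes per frame on line 21, each made of 7 data bits plus an odd-parity bit in the MSB. A small sketch of that parity scheme in C (the function names are mine, not from any library):

```c
#include <stdint.h>

/* EIA-608 sends each character as 7 data bits plus an odd-parity bit
   in bit 7, so every valid transmitted byte has an odd number of 1s. */
static uint8_t cc_popcount(uint8_t b)
{
    uint8_t n = 0;
    while (b) { n += b & 1; b >>= 1; }
    return n;
}

uint8_t cc_add_parity(uint8_t c)
{
    c &= 0x7F;                               /* keep the 7 data bits */
    return (cc_popcount(c) % 2 == 0) ? (uint8_t)(c | 0x80) : c;
}

int cc_check_parity(uint8_t c)               /* returns -1 on parity error */
{
    return (cc_popcount(c) % 2 == 1) ? (c & 0x7F) : -1;
}
```

Two bytes per frame at ~30 fps is why the usable bandwidth is modest, but it's plenty for telemetry, and a receive-side parity check gives you free error detection on a noisy link.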
If you already have a separate telemetry channel, this would provide you with a redundant one. It would also provide you with a means of overlaying OSD info on the received video without having to hook it all up to the main ground control processor.
Some hints... see LM1881; see this schematic (of a ground station tracking antenna driven by embedded info in the video signal); see the DakarOSD project, in particular post #26 which states: "1) La salida y entrada de video si estan unidas. Este OSD no genera una señal de video como pueden hacer otros que usen placas como la BOB-4. Lo que hace es inyectar informacion en una señal de video. El LM1881 informa que punto de la pantalla se esta pintando, y segun esta informacion el PIC modifica el punto para generar el caracter. Esto es explicado de una forma sencilla. Debido a esto es necesario que este conectada una fuente de video, en nuestro caso una camara. Sin la fuente de video no se vera nada en la salida."
Translation (by hand, not machine): "Video input and output are tied together on the board. This OSD doesn't generate a video signal like others that use boards such as the BOB-4. What it does is inject information into an existing video signal. The LM1881 reports which point of the screen is currently being painted, and based on that timing info the PIC modifies that point to generate the character. (That's the simple explanation.) Because of this, a video source must be connected, in our case a camera; since the OSD piggy-backs on the video signal for its sync, nothing will appear at the output without it."
From the LM1881 data sheet: "The vertical blanking interval is proving popular as a means to transmit data which will not appear on a normal T.V. receiver screen. Data can be inserted beginning with line 10 (the first horizontal scan line on which the color burst appears) through to line 21. Usually lines 10 through 13 are not used which leaves lines 14 through 21 for inserting signals, which may be different from field to field. In the U.S., line 19 is normally reserved for a vertical interval reference signal (VIRS) and line 21 is reserved for closed caption data for the hearing impaired. The remaining lines are used in a number of ways. Lines 17 and 18 are frequently used during studio processing to add and delete vertical interval test signals (VITS) while lines 14 through 18 and line 20 can be used for Videotex/Teletext data. Several institutions are proposing to transmit financial data on line 17 and cable systems use the available lines in the vertical interval to send decoding data for descrambler terminals."
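Putting the datasheet's line allocations into code: with the LM1881's vertical and composite sync outputs wired to two interrupt pins, the firmware only has to count horizontal sync pulses since the last vertical pulse to know which line is being painted. A hedged sketch, where the ISR names are placeholders rather than any real HAL:

```c
#include <stdbool.h>

/* Line counter driven by LM1881 sync outputs: reset on the vertical
   sync pulse, then increment once per horizontal sync pulse.
   on_vsync_edge/on_hsync_edge are hypothetical interrupt handlers. */
static volatile int g_line = 0;

void on_vsync_edge(void) { g_line = 0; }   /* new field begins */
void on_hsync_edge(void) { g_line++; }     /* next scan line   */

int current_line(void) { return g_line; }

/* Per the datasheet text above (NTSC): lines 10-13 are usually left
   alone, line 19 carries VIRS, and line 21 carries closed captions.
   That leaves lines 14-18 and line 20 free for custom data. */
bool vbi_line_free_for_data(int line)
{
    return (line >= 14 && line <= 18) || line == 20;
}
```

When `current_line()` lands on a line for which `vbi_line_free_for_data()` is true, the PIC would clock its data bits onto the video line; the ground unit runs the same counter off its own LM1881 and samples instead of driving.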
Hope that helps. Over and out.
Wow, thanks Lew! This is hugely helpful.
Now it's time to dust off my rusty Spanish and dig in to DakarOSD. I'm hoping I'll be able to build this into my budding open OSD project...
You're welcome, Ben... glad I could be of help. Here's some more information you might want to sift through, in order to avoid having your OSD become obsolete. It's up to you to determine if the new approach is worth the effort. Again, this is taken from a post of mine in another forum...
With all due respect, many other chip lines may not be available in the future. What counts is that the MAX7456 is available today, and is a single-chip OSD solution. With the programmable-ROM version, you can control pixels at the bit level (by designing your own sprites). Using an SoC solution leaves the processor free to do tons of magic stuff. If you're worried about the MAX7456 becoming obsolete, consider the Fujitsu MB90092 instead. It features a 24x32-dot character matrix, 4096H x 1024V dots, two pages of display-mapped memory, and very low (CMOS) power consumption. You can stack text, pictures and animations with it (as well as control color). Here is an example of it in a finished retail product, offered by some sentient beings.
Note that, in addition to the MB90092, there's also the MB90096, MB90097, MB90098A and MB90099. So, I'm not sure why you think the MAX7456 is one of the last remaining OSD chips.
On another note, here's a full schematic for an STM32F103 (with MAX7456) based OSD, which might help.
You might want to take a look at this PIC + LM1881 based OSD, also.
This is all just "food for thought." You need to do what you feel is best for your own situation. Regardless, your contributions are appreciated, and this reply should not be interpreted as discouragement.