Does anyone know of a relatively ready-to-use solution for sending serial data over an existing SD video stream? Maybe during the vblank section of each frame?
I'm hoping there might be an existing solution that would let me stream in serial data, encode it into the video for transmission, and then decode it back out of the received video and send it to a microcontroller as serial data.
Or maybe I'll just have to bite the bullet and add another radio system on yet another frequency...
Thanks for any ideas!
Ben
Replies
http://www.compendiumarcana.com/vbi/
I'm really thinking about building something... I'll look for something ready-made; if I can't find anything, I'll put some time into it.
One year has passed. Did anybody find a solution?
:)
So, another idea to avoid starting from scratch:
Use prebuilt closed-captioning encoder/decoder chips to encode data as closed captions in the video stream using an existing, standard data format.
So far I haven't found an encoder chip, but it looks like the Zilog Z86131 might be a great decoder for the ground side: http://www.zilog.com/docs/tv/z86129.pdf
Lew, any recommendations for a nice, simple, small encoder ic? :)
Thanks,
Ben
I propose a next-gen solution...
Looking at what's coming down the pipe, these folks now have a lightweight, low-power HDMI transmitter using the 4.9-5.9 GHz spectrum, with automatic frequency switching if noise exceeds a threshold...
http://www.brite-view.com/hdelight.php
The TX/RX set is $159.99, with a short-range (5 meter) transmitter.
Just think if this were at 200 mW or higher power on the TX side, like we have today on standard transmitters...
Theoretically, shouldn't we be able to get the extra bandwidth for data over the new 802.11n (wireless HDMI) standard, and bump up the range with a power/antenna combo, like we do today with standard video transmitters and omni/directional patch antennas?
Taken from my post in another forum...
If you do a little googling for EIA-608 and EIA-708, you'll discover the standard protocol for injecting "closed captioning" into a standard video stream (e.g., NTSC). The fact that it's already a widely used standard is important, because it means custom ICs already exist for doing this. Basically, you have roughly 960 bits per second of bandwidth (two bytes per field) for injecting data into the vertical interval signal on line 21. Or, you could take the H and V outputs of the MAX7456 OSD chip and use the non-display lines on the H sync for the same purpose. You'll see that the Rembizi OSD already does this: it uses the vertical blanking interval to embed telemetry data. This way, you can still use your audio channel to monitor your engine for strange noises and anomalies which might indicate it's time to head back!
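For reference, EIA-608 frames each line-21 byte as 7 data bits plus an odd-parity bit in the MSB, and carries two bytes per field; at NTSC's ~59.94 fields/s that works out to just under 960 bit/s, which is presumably the 960 figure usually cited for this channel. A minimal sketch in Python (illustrative only, not tied to any particular encoder chip):

```python
def eia608_parity(byte7):
    """Add the EIA-608 odd-parity bit (MSB) to a 7-bit value.

    The transmitted byte must contain an odd number of 1 bits,
    so the parity bit is set only when the 7 data bits hold an
    even count of 1s.
    """
    data = byte7 & 0x7F
    ones = bin(data).count("1")
    return data | (0x80 if ones % 2 == 0 else 0)

# Rough line-21 payload rate: 2 bytes per field, ~59.94 fields/s.
FIELDS_PER_SEC = 60000 / 1001        # NTSC field rate
BYTES_PER_FIELD = 2
bits_per_sec = FIELDS_PER_SEC * BYTES_PER_FIELD * 8

print(hex(eia608_parity(0x00)))      # 0x80: zero 1-bits is even, parity set
print(hex(eia608_parity(0x41)))      # 'A' has two 1-bits -> 0xc1
print(round(bits_per_sec))           # ~959
```

So even before framing overhead, this is a slow channel; fine for telemetry fields, not for bulk data.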
If you already have a separate telemetry channel, this would provide you with a redundant one. It would also provide you with a means of overlaying OSD info on the received video without having to hook it all up to the main ground control processor.
Some hints... see the LM1881; see this schematic (of a ground station tracking antenna driven by embedded info in the video signal); see the DakarOSD project, in particular post #26, which states: "1) La salida y entrada de video si estan unidas. Este OSD no genera una señal de video como pueden hacer otros que usen placas como la BOB-4. Lo que hace es inyectar informacion en una señal de video. El LM1881 informa que punto de la pantalla se esta pintando, y segun esta informacion el PIC modifica el punto para generar el caracter. Esto es explicado de una forma sencilla. Debido a esto es necesario que este conectada una fuente de video, en nuestro caso una camara. Sin la fuente de video no se vera nada en la salida."
Translation (by hand, not machine): "Video input/output is consolidated on the board. This OSD doesn't superimpose a video signal like others that use the BOB-4. Instead, it injects the data into the video signal. The venerable LM1881 is used to keep track of what portion of the screen the video signal is currently painting (i.e., which line we're about to scan), and depending on the timing info received (i.e., where we are on-screen), the PIC modifies that point to generate the character. This is explained in a straightforward manner, gringo. Because of this, it's necessary that a video source be connected (i.e., the cam... to generate the required signal... otherwise no data will be output since there is no sync signal or horizontal lines). Without the camera, no data will be output (since it piggy-backs on the video signal, 'bro)."
From the LM1881 data sheet: "The vertical blanking interval is proving popular as a means to transmit data which will not appear on a normal T.V. receiver screen. Data can be inserted beginning with line 10 (the first horizontal scan line on which the color burst appears) through to line 21. Usually lines 10 through 13 are not used which leaves lines 14 through 21 for inserting signals, which may be different from field to field. In the U.S., line 19 is normally reserved for a vertical interval reference signal (VIRS) and line 21 is reserved for closed caption data for the hearing impaired. The remaining lines are used in a number of ways. Lines 17 and 18 are frequently used during studio processing to add and delete vertical interval test signals (VITS) while lines 14 through 18 and line 20 can be used for Videotex/Teletext data. Several institutions are proposing to transmit financial data on line 17 and cable systems use the available lines in the vertical interval to send decoding data for descrambler terminals."
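The allocation in that data-sheet excerpt can be summarized in a small table. The sketch below is my own summary of the quote (not an authoritative register of VBI usage, which varies by market), picking out the lines with the least-reserved roles in the U.S., i.e. the likeliest candidates for private telemetry:

```python
# Typical U.S. NTSC vertical-interval line usage, per the LM1881
# data-sheet excerpt above (summary only; actual use varies).
VBI_USAGE = {
    14: "Videotex/Teletext data",
    15: "Videotex/Teletext data",
    16: "Videotex/Teletext data",
    17: "VITS / proposed financial data / Teletext",
    18: "VITS / Teletext",
    19: "VIRS (vertical interval reference)",
    20: "Videotex/Teletext data",
    21: "Closed captions (EIA-608)",
}

def least_contended(usage):
    """Return VBI lines whose listed use is only Teletext-style data,
    i.e. candidates for injecting private telemetry."""
    return [n for n, use in sorted(usage.items())
            if use == "Videotex/Teletext data"]

print(least_contended(VBI_USAGE))    # lines 14-16 and 20
```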
Hope that helps. Over and out.
EagleTree uses a technique based on the closed-caption TV system for sending data over a video stream. It works very well, even when the video link is quite impaired. It is part of the OSD Pro system, which sends data to the EagleEyes ground unit; the ground unit then separates the data out for use with Google Earth, local storage, etc.
The audio method, by comparison, is limited to about 4800 baud.
Peter
I haven't found much yet. The best I've found so far are the EZ products I linked to above. Another promising option is the EzUHF, a long-range RC system that allows two-way communication. Custom telemetry isn't supported yet, only their own products' telemetry, but the product designer/engineer sounded willing to support it in the future. Just not yet...
http://www.immersionrc.com/products.htm