Posted by David Sprague on October 1, 2009 at 7:39pm
(Chris, feel free to move this discussion to a different section of the forum; I don't know if you want to do a separate section for these projects or whatever.) I'm looking forward to working with Remzibi on this :-) Chris, it would be helpful to me to understand what your goal is for this project. I'll toss out a couple of ideas I've had and some background on related technologies.
This capability could let a system with an autopilot and FPV send telemetry from the aircraft to the ground station/laptop via the FPV video signal rather than requiring a separate XBee link. This would only be a one-way data link, of course, from the aircraft to the ground.
It would also let you move the video overlay function to the ground unit if that were desirable (i.e. send telemetry and the camera video in a single video channel and do the overlay on the ground rather than in the aircraft), which would save some weight/space in the aircraft. This might also address the situation where the autopilot and the OSD both want to connect to the same set of sensors (GPS, airspeed, etc.).
Here's some background on combined video/data technologies from my own knowledge -- I worked on some of this technology a long time ago -- and from Wikipedia. There are at least two different services that currently transmit ancillary data in the non-visible lines of a standard NTSC or PAL video signal. Those non-visible lines are referred to as the vertical blanking interval (VBI).
In the US, the Closed Captioning service uses line 21 of the VBI to send text subtitles along with the video and audio. It has a data rate of 960 bits per second. Most if not all NTSC television receivers decode the closed captioning and can optionally display the text on the screen overlaid on the video images. You can specify the position on the screen where you want the text to appear, but I believe it uses a fixed character set.
A service called Teletext was developed in the UK and, as I understand it, is/was in use throughout Europe to embed digital data for information services in PAL video transmissions. This standard supports significantly higher data rates than the Closed Captioning service -- up to 7 kbits per second per line used, with the potential to use up to 17 or 18 lines in the VBI. I believe that many PAL televisions include hardware to decode and display the Teletext data, with the ability to overlay it on the main display.
For our purposes, of course, we would like to insert serial data from the autopilot or other airborne device into the transmitted video from the aircraft, then extract that serial data stream from the video receiver on the ground and feed it into the ground station, laptop, or other hardware. We might be able to use existing Closed Caption or Teletext solutions, or just do our own insertion and decoding of data into the video signal using custom hardware/software.
As a next step, I thought I would check out the video transmitter/receiver hardware used for FPV to see if the components they use happen to support VBI data insertion/extraction. Any favorite video tx/rx hardware I should look at first?
Comments welcome!
Dave
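For a sense of how the line-21 Closed Captioning framing works at the bit level: each character is 7 data bits plus an odd-parity bit in the MSB, two characters per captioned field. Below is a minimal C++ sketch of that parity step; the example characters are arbitrary, and the rate arithmetic simply reproduces the 960 bit/s figure quoted above.

```cpp
#include <cstdint>
#include <cstdio>

// EIA-608 (line-21 Closed Captioning) characters are 7 data bits with
// an odd-parity bit in the MSB; two characters are carried per field.
uint8_t cc_add_odd_parity(uint8_t ch) {
    ch &= 0x7F;                       // keep the 7 data bits
    uint8_t ones = 0;
    for (uint8_t b = ch; b; b >>= 1)  // count the set bits
        ones += b & 1;
    // Set bit 7 if needed so the total number of 1s is odd.
    return (ones % 2 == 0) ? (ch | 0x80) : ch;
}

int main() {
    // One field's worth of payload: two example characters.
    std::printf("0x%02X 0x%02X\n",
                cc_add_odd_parity('G'), cc_add_odd_parity('P'));
    // 2 chars/field * 8 bits * ~60 fields/s (with both fields carrying
    // captions) is where the ~960 bit/s figure above comes from.
    return 0;
}
```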
Replies
Poke Poke, Bump Bump. I am checking to see where this Project left off. Did it get spun up into something else? I would like to be able to interface with the APM (2.6) and send telemetry data (for use at ground station) over the existing video link. I have been searching and have not found a good solution as yet. Advice welcomed!
@Remzibi: It won't work without the camera at all... the MAX needs the sync pulses to work, and they are generated by the video camera.
Carlito
I've been researching this subject (Closed Captioning) for some time now. The most successful projects I found were:
http://www.brouhaha.com/~eric/pic/caption/
http://nootropicdesign.com/projectlab/2011/03/20/decoding-closed-ca...
In the second project, the author implemented Remzibi's first option: he created two characters for the 0 and 1 bits and one character for the start bit.
In each frame he could transmit 2 bytes without errors, i.e. 60 B/s at 30 fps. That sounds like enough for GPS, mode, and attitude data.
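A rough sketch, not taken from either project, of how one frame's worth of detected symbols could be turned back into bytes on the ground. The three symbol values and the MSB-first bit order are assumptions for illustration; only the framing (a start symbol followed by 16 bit symbols, i.e. 2 bytes per frame) follows what is described above.

```cpp
#include <cstdint>
#include <cstddef>

// Placeholder values for the three detected symbols; the real mapping
// depends on which characters the OSD actually draws.
enum Symbol : uint8_t { SYM_ZERO = 0, SYM_ONE = 1, SYM_START = 2 };

// Reassemble 2 bytes from one video frame: a start symbol followed by
// 16 bit symbols (MSB first is assumed here).  Returns false if the
// frame does not look valid.  2 bytes/frame at 30 fps gives 60 B/s.
bool decode_frame(const Symbol line[17], uint8_t out[2]) {
    if (line[0] != SYM_START) return false;
    uint16_t word = 0;
    for (std::size_t i = 1; i < 17; ++i) {
        if (line[i] != SYM_ZERO && line[i] != SYM_ONE) return false;
        word = static_cast<uint16_t>((word << 1) | (line[i] == SYM_ONE ? 1 : 0));
    }
    out[0] = static_cast<uint8_t>(word >> 8);
    out[1] = static_cast<uint8_t>(word & 0xFF);
    return true;
}
```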
The two problems one might find are:
The sync separator:
In the first project the author initially used an LM1881 but couldn't split the signal properly, so he switched to an EL4581C.
In the second project the author had the same problem, but changing the resistor values made it work.
The 0 and 1 bit voltage levels:
In the first project the author built a comparator from two op-amps, which sounded very unreliable.
In the second project the author fed the video signal into a transistor, and that worked fine.
If anyone can implement the MAX side, I can try to decode the signal; I've already ordered the components and I'm starting to build a PCB to decode the data.
Carlito
I don't want to be a distraction to you guys, but if the Remzibi OSD is out of memory to implement the first idea, what about designing an audio modem that transmits telemetry data through the audio channel of the video Tx? Wouldn't that make things simpler, since you don't have to worry about the PAL/NTSC stuff?
That said, Remzibi's solution would be better because it means less overall hardware.
Also, you could ask some of the developers of the DakarOSD to get on board; I can't recall if their code is open source, but I know their hardware is.
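For the audio-channel idea, one common approach (my suggestion, not something anyone in this thread has built) would be Bell-202-style AFSK as used by APRS: 1200 baud, with 1200 Hz and 2200 Hz tones. A minimal C++ sketch that renders one byte with 8N1 framing into audio samples:

```cpp
#include <cstdint>
#include <cmath>
#include <vector>

// Bell-202-style AFSK: 1200 baud, 1200 Hz for a 1 bit, 2200 Hz for a
// 0 bit.  The tone pair, baud rate and sample rate are illustrative;
// anything that survives the video transmitter's audio channel works.
std::vector<int16_t> afsk_byte(uint8_t byte, double sample_rate = 48000.0) {
    const double kPi = 3.14159265358979323846;
    const int samples_per_bit = static_cast<int>(sample_rate / 1200.0);
    // 8N1 framing: start bit (0), 8 data bits LSB first, stop bit (1).
    int bits[10];
    bits[0] = 0;
    for (int i = 0; i < 8; ++i) bits[1 + i] = (byte >> i) & 1;
    bits[9] = 1;
    std::vector<int16_t> out;
    double phase = 0.0;
    for (int b = 0; b < 10; ++b) {
        const double freq = bits[b] ? 1200.0 : 2200.0;
        for (int s = 0; s < samples_per_bit; ++s) {
            phase += 2.0 * kPi * freq / sample_rate;  // continuous phase
            out.push_back(static_cast<int16_t>(10000.0 * std::sin(phase)));
        }
    }
    return out;
}
```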
Cheers,
Nick
Telemetry in video can be done in two ways.
The first is to use the existing capability of the MAX7456 OSD and define two symbols (fonts) in its EEPROM representing 0 and 1 (e.g. a white and a black square), or perhaps 16 graphic symbols representing 0, 1, 2, ... D, E, F.
Then use the first (or last) line of the OSD overlay (usually covered by the frame border of monitors or goggles) to transmit any info.
This can be done by the OSD on its own, or using ArduPilot and a UART "$Mxxxxxxxx" message.
On the ground a microcontroller can decode this info. The advantage is that even if the camera stops working, the OSD will still generate its picture, so there are no transmission problems.
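A sketch of what the airborne side of this first method might look like. The custom character indices, the simulated display buffer, and the bit order are all assumptions; a real build would write the MAX7456's display memory over SPI instead of the stub used here.

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical indices of the three custom characters stored in the
// MAX7456 character EEPROM: a "0" square, a "1" square, and a start
// marker.  These values are placeholders, not from any real font.
const uint8_t CHAR_BIT0  = 0xF0;
const uint8_t CHAR_BIT1  = 0xF1;
const uint8_t CHAR_START = 0xF2;

// Stand-in for the OSD display memory (PAL mode is 16 rows x 30
// columns); a real implementation would write the MAX7456 over SPI.
static uint8_t osd_ram[16][30];

void osd_write_char(uint8_t row, uint8_t col, uint8_t chr) {
    if (row < 16 && col < 30) osd_ram[row][col] = chr;
}

// Pack telemetry bytes into the top OSD row: one start symbol, then
// one bit symbol per bit, MSB first (bit order assumed).  A 30-column
// row holds the start symbol plus up to 29 bit symbols, so at most
// 3 whole bytes per frame this way.
void encode_row(const uint8_t *data, std::size_t len) {
    uint8_t col = 0;
    osd_write_char(0, col++, CHAR_START);
    for (std::size_t i = 0; i < len && col + 8 <= 30; ++i) {
        for (int b = 7; b >= 0; --b) {
            osd_write_char(0, col++, (data[i] >> b) & 1 ? CHAR_BIT1 : CHAR_BIT0);
        }
    }
}
```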
The second way is harder (it needs hardware modification): use the H and V sync outputs of the MAX7456, so we can use the invisible lines for the same purpose (but only while they are invisible; the rest of the time the chip works in its normal way). On the ground it is the same: a microcontroller with an H/V sync separator to decode the info. The advantage is the same, since the MAX7456 still generates its own sync and picture after a camera fault.
In both approaches the ground software differs only in counting H lines after the V sync signal; the rest can be the same, as can the hardware: an H/V sync separator and a microcontroller reading the signal through a comparator.
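A very rough Arduino-style C++ sketch of that ground-side line counting. The pin assignments, the target line number, and the sample spacing are all assumptions, and on an 8-bit AVR the real sampling would need much tighter (probably hand-tuned) timing than shown; the sync separator is assumed to provide vertical sync and horizontal/composite sync on separate pins, with the sliced (comparator) video on a third.

```cpp
// Assumed wiring on an Arduino-class board: VSYNC on pin 2, HSYNC (or
// composite sync) on pin 3, comparator output on pin 4.  All values
// here are placeholders for illustration.
const uint8_t PIN_VSYNC = 2;
const uint8_t PIN_HSYNC = 3;
const uint8_t PIN_VIDEO = 4;
const uint16_t TARGET_LINE = 18;     // an invisible VBI line (assumed)

volatile uint16_t line_count = 0;
volatile uint8_t  bits[17];
volatile bool     frame_ready = false;

void onVsync() { line_count = 0; }   // restart the count every field

void onHsync() {
  line_count++;
  if (line_count == TARGET_LINE && !frame_ready) {
    // Sample the sliced video across this one line.  The spacing is a
    // placeholder; it would be tuned to the on-screen symbol width.
    for (uint8_t i = 0; i < 17; i++) {
      delayMicroseconds(3);
      bits[i] = digitalRead(PIN_VIDEO);
    }
    frame_ready = true;
  }
}

void setup() {
  Serial.begin(57600);
  pinMode(PIN_VIDEO, INPUT);
  attachInterrupt(digitalPinToInterrupt(PIN_VSYNC), onVsync, FALLING);
  attachInterrupt(digitalPinToInterrupt(PIN_HSYNC), onHsync, FALLING);
}

void loop() {
  if (frame_ready) {
    Serial.write((const uint8_t *)bits, sizeof(bits));  // raw samples out
    frame_ready = false;
  }
}
```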
A third possibility is to run digit-recognition software on a laptop on the ground and use the recognized values as telemetry data.
The idea is to use Closed Captioning or Teletext to control a directional antenna on the ground. We only need to transmit around 40 bytes per second (maybe less), including latitude, longitude, altitude, speed, and heading (the last two for antenna dead reckoning), plus a checksum (a rough packet layout is sketched below, after the requirements). This information can also be reused to display the aircraft's actual position in Google Earth and to log the flight.
Some other requirements:
-Must be an independent and easy-to-adapt system (an optional product).
-Compatible with both PAL and NTSC.
-Keep the original OSD system on the aircraft (no overlaying of the video on the ground).
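A rough idea of what such a packet could look like in C++. The field widths and the XOR checksum are my assumptions, sized so that even two packets per second stays well under the ~40 bytes per second mentioned above.

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical fixed-size telemetry packet for antenna tracking and
// logging: 16 payload bytes plus a 1-byte checksum (17 bytes total).
#pragma pack(push, 1)
struct TrackPacket {
    int32_t  lat_e7;      // latitude  * 1e7, degrees
    int32_t  lon_e7;      // longitude * 1e7, degrees
    int32_t  alt_cm;      // altitude, centimetres
    uint16_t speed_cms;   // ground speed, cm/s (for dead reckoning)
    uint16_t heading_cd;  // heading, centidegrees (for dead reckoning)
    uint8_t  checksum;    // XOR of all preceding bytes (NMEA-style)
};
#pragma pack(pop)

uint8_t xor_checksum(const uint8_t *p, std::size_t n) {
    uint8_t c = 0;
    while (n--) c ^= *p++;
    return c;
}

void finalize(TrackPacket &pkt) {
    pkt.checksum = xor_checksum(reinterpret_cast<const uint8_t *>(&pkt),
                                sizeof(TrackPacket) - 1);
}
// Two packets per second is 34 B/s, inside the ~40 B/s budget above.
```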
Also, some other OSDs, like the Dakar, Eagle Tree, and RangeVideo OSDs, already have this feature.
Regards!
From Project 4 (Design the hardware for the ArduPilot Mega ground station)
At this stage the GS will only have an XBee, acting as both Rx and Tx. This implies that there must be an XBee on board the aircraft as well. The self-contained GS (i.e. no PC/laptop) won't have the hardware to decode video, or rather it may be too expensive to do the video decoding.
It's early days yet, so I may be wrong here; the team hasn't started yet ;) and may decide otherwise :)
Rgrds
Sarel Wagner