A typical drone carries an onboard camera, and a key requirement is to transmit the live camera feed to ground devices. As a next step, we also want the drone to acquire images into its onboard processing framework and apply computer vision algorithms, or even simple image processing, to enhance the raw image. We would then like the processed image stream to be sent to ground devices as well. And once we have the video stream on a laptop or mobile, why not interact with it and send inputs back to the onboard computer? Welcome to FlytOS!
FlytOS is a ROS-based framework, and ROS enables a clean, modular way to process and share data. The FlytOS image pipeline is as follows:
The basic idea is to get all image data onto the ROS pub-sub bus. An image processing module can then subscribe to raw images and publish processed images back. A web video server can subscribe to any of the available image topics and stream it to ground devices over the web as needed.
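To make the pub-sub idea concrete, here is a toy, ROS-free sketch in plain Python. This is not the rospy API, and the topic names are only illustrative; it simply mirrors the flow of camera driver, processing module, and video server exchanging frames over topics:

```python
# Toy in-process pub-sub bus illustrating the FlytOS image pipeline idea.
# NOT the ROS API: real modules would use rospy publishers/subscribers.

class Bus:
    def __init__(self):
        self.subscribers = {}  # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, msg):
        for cb in self.subscribers.get(topic, []):
            cb(msg)

bus = Bus()
streamed = []

# "Image processing module": subscribes to raw frames, publishes enhanced ones.
def enhance(frame):
    bus.publish("/camera/image_processed", frame.upper())  # stand-in for real processing

bus.subscribe("/camera/image_raw", enhance)

# "Web video server": subscribes to the processed topic for streaming to ground.
bus.subscribe("/camera/image_processed", streamed.append)

# "Camera driver": publishes raw frames onto the bus.
for frame in ["frame1", "frame2"]:
    bus.publish("/camera/image_raw", frame)

print(streamed)  # → ['FRAME1', 'FRAME2']
```

The point of the sketch is only that producers and consumers never talk to each other directly; any new module can tap into any topic, which is exactly what makes the web video server able to stream either raw or processed images.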
The images are captured from a USB camera attached to the FlytPOD or ODROID and are published on ROS using a camera driver. The ROS-based web video server subscribes to an image topic and provides an MJPEG stream over HTTP. This stream can then be viewed in most modern browsers without any plugin or client software. FlytOS provides options through its web APIs to list the available stream topics and to start/stop a particular stream. You can also request a snapshot on demand. Further, you can interact with the image stream, e.g., select a region of interest and send that information back to the onboard computer. We have tested this setup and it works well, as the demo video above shows.
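For reference, MJPEG over HTTP is simply a multipart/x-mixed-replace response in which each part is one JPEG frame, so the browser keeps replacing the displayed image. The sketch below (stdlib only; the boundary string and the fake frame bytes are placeholders, not what the actual server emits) shows how one frame of such a stream is framed:

```python
# Sketch of MJPEG-over-HTTP framing: each JPEG frame becomes one part of a
# multipart/x-mixed-replace body. Boundary and frame data are illustrative.

BOUNDARY = b"frame"  # arbitrary; must match the boundary in the Content-Type header

def http_headers():
    # Response headers the streaming server sends once, before the first frame.
    return (b"HTTP/1.1 200 OK\r\n"
            b"Content-Type: multipart/x-mixed-replace; boundary=" + BOUNDARY +
            b"\r\n\r\n")

def frame_part(jpeg_bytes):
    # One multipart section wrapping a single JPEG frame.
    return (b"--" + BOUNDARY + b"\r\n"
            b"Content-Type: image/jpeg\r\n"
            b"Content-Length: " + str(len(jpeg_bytes)).encode() + b"\r\n\r\n"
            + jpeg_bytes + b"\r\n")

fake_jpeg = b"\xff\xd8 fake jpeg payload \xff\xd9"  # stand-in for real JPEG data
part = frame_part(fake_jpeg)
print(part.decode("latin-1"))
```

This framing is why no client software is needed: the server just keeps appending `frame_part(...)` outputs to the open HTTP response, and the browser renders each new part in place of the previous one.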
However, there are several constraints to consider for drone video streaming. We should be able to:
Capture images from the camera into the onboard processing framework, such as FlytOS, for image processing (as opposed to having the camera as a payload transmitting images directly to the ground)
Transmit raw as well as processed images to the ground
Get a low-latency, real-time stream, as stale data may be unhelpful or sometimes even harmful
Keep power/processing requirements relatively low, to suit the onboard computer
Keep bandwidth requirements relatively low
Get good quality images/video
Have adaptive streaming based on available network bandwidth
The current setup in FlytOS serves the first two items well and reasonably addresses the next three. It is not yet focused on HD quality and does not support adaptive streaming. Additionally, we wanted to stream the video to a web browser without the need for client software or a plugin, for easy integration with web apps and cross-platform accessibility. A good web-oriented alternative with a real-time focus could be WebRTC; however, it does not yet have solid support on ARM Linux.
In any case, serving multiple parallel streams directly from the drone to several ground devices is probably not ideal, as it would strain the onboard computer and its network link. An alternative setup is to stream the images from the drone to a single computer on the ground. This ground device would act as a distribution server (preferably in the cloud) that can then stream in parallel to multiple client devices, and it could also host a WebRTC setup. FlytOS will continue to improve upon its solution, and such a streaming architecture could be the next step.
Meanwhile, you can get started with FlytOS right away, test video streaming, and try out the sample apps. We would love to hear your feedback and discuss your views on the next steps.
Note: The demo video above shows a sample of object detection and tracking, accomplished using the Object Tracking module in FlytOS. We will talk more about what is available and how it works in the next article. Stay tuned!
Download FlytOS: http://flytbase.com/flytos/#download (get updated .deb if already downloaded)
Video Streaming API reference: http://docs.flytbase.com/docs/FlytAPI/REST_WebsocketAPIs.html#video-streaming-apis
FlytPOD is live on Indiegogo. You can preorder it now: https://www.indiegogo.com/projects/flytpod-advanced-flight-computer...
@Pradeep - Have you looked at the VIP-ST board from Tonbo Imaging? It is a very small form-factor board, ideal for UAVs, with built-in support for real-time video stabilization, object tracking, and MJPEG/H.264 streaming. You can reach them at email@example.com
Thanks for the comment, and you are right, but let me add a little context. This post is focused on the computer vision aspects, and the idea is to be able to stream any (processed) image topic in ROS to ground devices, as opposed to a raw camera feed. The image stream in this case should be good enough for monitoring and interacting with the onboard image processing module. Secondly, we wanted to be able to stream directly to the web for accessibility on any ground device. MJPEG is still a relatively universal option in this sense and works without the need for any dedicated hardware or software for encoding/decoding. This scheme could be augmented with a parallel direct camera feed (perhaps with GStreamer H.264, if not done in hardware). An overall video streaming solution can thus be multi-fold, addressing the constraints listed in the article, while the current ROS web video server based approach works well for getting started with vision algorithms. In light of this, any inputs for improving the approach are welcome.
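As a pointer for the H.264 route mentioned above, a direct camera feed could be pushed over RTP with a stock GStreamer pipeline. The snippet below merely assembles such a gst-launch-1.0 command line as a string; the device path, host, and port are placeholders, and on boards with a hardware encoder the software x264enc element would be swapped for the platform's hardware equivalent:

```python
# Assemble an illustrative gst-launch-1.0 command for streaming a USB camera
# as H.264 over RTP/UDP. Element names are standard GStreamer plugins; the
# device, host, and port values below are placeholders.

def h264_udp_pipeline(device="/dev/video0", host="192.168.1.10", port=5600):
    elements = [
        "v4l2src device={}".format(device),   # capture from the USB camera
        "videoconvert",                       # pixel-format conversion
        "x264enc tune=zerolatency",           # software H.264 encoder
        "h264parse",
        "rtph264pay",                         # packetize into RTP
        "udpsink host={} port={}".format(host, port),
    ]
    return "gst-launch-1.0 " + " ! ".join(elements)

cmd = h264_udp_pipeline()
print(cmd)
```

Such a feed would run alongside, not through, the ROS web video server, which keeps the browser-friendly MJPEG path for the processed topics while offering a more bandwidth-efficient raw feed to clients that can decode H.264.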
Strange that there are no comments on this, as it is very relevant.
That being said, reading about MJPEG for video streaming gave me some bad flashbacks to the '90s. And it does not make much sense, since pretty much all SoC computers today have dedicated H.264 video hardware for encoding and decoding.