ADAS: will this be the next generation of autopilot?
The next step toward fully autonomous UAV systems is to acquire and process, in real time, visual (and/or other-spectrum) information from the space surrounding the vehicle, and to send steering commands to the flight control unit, so that the vehicle can navigate an obstructed and ever-changing environment without any external intervention.
Real-time performance of an embedded vision system is a real challenge, as no single hardware architecture perfectly meets the requirements of every processing level. Computer vision processing can be categorized into three levels: low-level processing, characterized by millions of repetitive operations on pixels every second; intermediate-level processing, focused on regions of interest and the classification of thousands of objects per second; and high-level processing, where object recognition, sensor fusion, decision making and application control run at a rate of hundreds of operations per second.
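To make the three levels concrete, here is a minimal Python sketch of a toy pipeline. The helper names and the steering logic are hypothetical, chosen purely to illustrate how work shrinks from per-pixel operations to a single decision:

```python
# Illustrative sketch of the three vision processing levels on a tiny
# grayscale frame (nested lists stand in for a real image buffer).
# All function names and the decision rule are hypothetical.

def low_level_threshold(frame, t):
    """Low level: one repetitive operation applied to every pixel."""
    return [[1 if p > t else 0 for p in row] for row in frame]

def mid_level_blob(mask):
    """Mid level: reduce the pixel mask to a region of interest
    (here, the bounding box of all foreground pixels)."""
    coords = [(r, c) for r, row in enumerate(mask)
              for c, v in enumerate(row) if v]
    if not coords:
        return None
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return (min(rows), min(cols), max(rows), max(cols))

def high_level_decide(box, frame_width):
    """High level: a steering decision derived from the detected region."""
    if box is None:
        return "hold course"
    _, c0, _, c1 = box
    center = (c0 + c1) / 2
    return "steer left" if center > frame_width / 2 else "steer right"

frame = [[10, 10, 200, 220],
         [10, 10, 210, 230],
         [10, 10, 10, 10]]
mask = low_level_threshold(frame, 128)   # millions of ops/s in practice
box = mid_level_blob(mask)               # thousands of regions/s
print(high_level_decide(box, 4))         # hundreds of decisions/s
```

The point of the sketch is the shrinking data volume at each stage, which is exactly why each level maps naturally onto different hardware.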
At the current state of development, new generations of Systems on Chip (SoCs) that tightly integrate ARM processors with programmable logic are particularly well suited to these requirements (1)(2). Multiple camera signals can be processed in parallel and with very low latency within the programmable logic fabric of these SoCs, and the intermediate and high levels can be shared between the FPGA, DSP, LUT and ARM processor resources within the chip. High-level libraries and tools, such as OpenCV, can be synthesized into programmable logic to build a customized accelerator offering both higher performance and much lower power consumption than a comparable GPU-CPU design. Other technologies for this type of processing are available. ASICs are highly integrated chips designed and built for a specific application; they offer high performance and low power consumption, but manufacturing costs make them unaffordable for low-volume applications. DSPs are very attractive for embedded vision systems thanks to their single-cycle multiply-and-accumulate operations, parallel processing capabilities and integrated memory blocks.
A new generation of DSPs, engineered specifically for Advanced Driver Assistance Systems (ADAS) in the automotive industry, is now available. These SoC families incorporate a heterogeneous, scalable architecture that mixes DSP cores, vision accelerators, an ARM Cortex-A15 MPCore and dual Cortex-M4 processors. Texas Instruments and Toshiba have just released their own SoCs dedicated to this emerging market (3)(4). The TI TDA2x SoC, for example, combines fixed- and floating-point TMS320C66x DSP cores, a Vision AccelerationPac, an ARM Cortex-A15 MPCore and dual Cortex-M4 processors. The integration of a video accelerator for decoding multiple video streams over an Ethernet AVB network, along with graphics accelerators for rendering virtual views, enables a 3D viewing experience. The TDA2x also integrates a host of peripherals, including multi-camera interfaces (both parallel and serial) for LVDS-based surround-view systems, displays, CAN and Gigabit Ethernet.
The TI Vision Software Development Kit ships with more than 200 optimized functions for both the Embedded Vision Engine (EVE) and DSP libraries, providing developers with building blocks to jump-start development and reduce time to market. Both libraries cover low-, mid- and high-level vision processing. Integral image, gradient, morphological operations and histograms are examples of low-level image processing functions; HOG, rBRIEF, ORB, Harris corners and optical flow are key mid-level functions; and Kalman filtering and AdaBoost are high-level processing functions.
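To give a feel for what one of these low-level building blocks does, here is a plain-Python sketch of the integral image (summed-area table), one of the functions named above. This is a generic textbook formulation, not TI's optimized implementation: cell (r, c) holds the sum of all pixels from the origin to (r, c), so any rectangular sum can later be read back with just four lookups.

```python
# Integral image (summed-area table): a classic low-level vision kernel.
# ii[r][c] = sum of img over the inclusive rectangle (0, 0)-(r, c).

def integral_image(img):
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for r in range(h):
        row_sum = 0
        for c in range(w):
            row_sum += img[r][c]
            # Running row sum plus the integral of all rows above.
            ii[r][c] = row_sum + (ii[r - 1][c] if r > 0 else 0)
    return ii

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of pixels in the inclusive rectangle (r0, c0)-(r1, c1),
    in constant time, using four table lookups."""
    total = ii[r1][c1]
    if r0 > 0:
        total -= ii[r0 - 1][c1]
    if c0 > 0:
        total -= ii[r1][c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1][c0 - 1]
    return total

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 2, 2))  # 5 + 6 + 8 + 9 = 28
```

The constant-time rectangular sum is what makes detectors such as Viola-Jones (AdaBoost cascades over Haar features) feasible in real time, which is why integral image sits in the low-level library while AdaBoost sits in the high-level one.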
Future development on these platforms can be implemented with the OpenVX framework (5). OpenVX is an open, royalty-free standard for cross-platform acceleration of computer vision applications. OpenVX enables performance- and power-optimized computer vision processing, which is especially important in embedded and real-time use cases such as face, body and gesture tracking, smart video surveillance, advanced driver assistance systems (ADAS), object and scene reconstruction, augmented reality, visual inspection, robotics and more.
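OpenVX itself is a C API; its key idea is a graph-based execution model in which processing nodes are declared up front, the graph is verified once, and then processed repeatedly per frame, leaving the runtime free to map each node onto the best available accelerator. The following is a hypothetical Python mini-model of that flow (loosely mirroring the vxCreateGraph / vxVerifyGraph / vxProcessGraph sequence), not the real OpenVX API:

```python
# Toy model of OpenVX's graph-based execution: nodes are declared and
# connected first, the graph is verified once, then processed per frame.
# All class and method names here are illustrative, not the OpenVX API.

class Graph:
    def __init__(self):
        self.nodes = []      # (function, input key, output key)
        self.verified = False

    def add_node(self, fn, src, dst):
        self.nodes.append((fn, src, dst))

    def verify(self):
        # A real runtime checks formats and connections here, and picks
        # a target (DSP, EVE, GPU, ...) for each node ahead of time.
        produced = {"input"}
        for _, src, dst in self.nodes:
            if src not in produced:
                raise ValueError(f"node input '{src}' not produced yet")
            produced.add(dst)
        self.verified = True

    def process(self, frame):
        if not self.verified:
            raise RuntimeError("graph must be verified before processing")
        data = {"input": frame}
        for fn, src, dst in self.nodes:
            data[dst] = fn(data[src])
        return data[self.nodes[-1][2]]

g = Graph()
g.add_node(lambda img: [[p // 2 for p in row] for row in img], "input", "dim")
g.add_node(lambda img: [[255 - p for p in row] for row in img], "dim", "out")
g.verify()                      # done once, ahead of time
print(g.process([[100, 200]]))  # cheap per-frame call
```

Separating one-time verification from the per-frame processing loop is what lets an OpenVX implementation do expensive scheduling and memory-planning work off the critical path.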
So, next time you search for an advanced autopilot to control your UAV, take a look under the hood.