Take a look under the hood

ADAS: will this be the next generation of autopilot?

 


 

The next step toward fully autonomous UAV systems is to acquire and process, in real time, information from the visual (and/or other spectral) space surrounding the vehicle, and to send steering commands to the flight control unit, so that the vehicle can navigate an obstructed, ever-changing environment without any external intervention.

 

Real-time performance in an embedded vision system is a real challenge, as no single hardware architecture perfectly meets the requirements of every processing level. Computer vision processing can be split into three levels: low-level processing, characterized by millions of repetitive per-pixel operations every second; intermediate-level processing, focused on regions of interest, classifying thousands of objects every second; and high-level processing, where object recognition, sensor fusion, decision making and application control run at hundreds of operations per second.
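
To make the split concrete, here is a minimal OpenCV sketch (mine, not taken from any of the toolkits discussed below) of the three levels running on a single camera stream; the camera index and the feature-count threshold are arbitrary placeholders:

    #include <opencv2/opencv.hpp>
    #include <iostream>

    int main() {
        cv::VideoCapture cap(0);                 // hypothetical camera index
        cv::Mat frame, gray, grad;
        cv::Ptr<cv::ORB> orb = cv::ORB::create(500);

        while (cap.read(frame)) {
            // Low level: repetitive per-pixel work on every frame.
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
            cv::Sobel(gray, grad, CV_16S, 1, 0);

            // Intermediate level: extract and describe regions of interest.
            std::vector<cv::KeyPoint> kps;
            cv::Mat desc;
            orb->detectAndCompute(gray, cv::noArray(), kps, desc);

            // High level: a placeholder decision made at a much lower rate,
            // e.g. flagging a feature-dense scene to the flight controller.
            if (kps.size() > 300)
                std::cout << "dense scene: " << kps.size() << " features\n";
        }
        return 0;
    }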

 

At the current state of development, new generations of Systems on Chip (SoC) that tightly integrate ARM processors with programmable logic are particularly well suited to meet these requirements (1)(2). Multiple camera signals can be processed in parallel, with very low latency, within the programmable logic fabric of these SoCs, and the intermediate and high levels can be shared between the FPGA fabric, DSP slices, LUTs and ARM processors on the chip. High-level libraries and tools, such as OpenCV, can be synthesized into programmable logic to build a customized accelerator that delivers both higher performance and much lower power consumption than a comparable GPU-CPU design. Other technologies are available for this type of processing. ASICs are highly integrated chips designed and built for a specific application; they offer high performance and low power consumption, but manufacturing costs make them unaffordable for low-volume applications. DSPs are also very attractive for embedded vision systems thanks to their single-cycle multiply-accumulate operations, parallel processing capabilities and integrated memory blocks.
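
To give a feel for what synthesizing a vision function into programmable logic looks like, here is a sketch of a trivial pixel kernel written in the C++ style that high-level synthesis tools accept; the pragma follows Xilinx Vivado HLS conventions, and the resolution is an assumption:

    #include <ap_int.h>

    const int WIDTH  = 1280;
    const int HEIGHT = 800;

    // A trivial binary-threshold stage: after synthesis the loop becomes a
    // hardware pipeline consuming one pixel per clock cycle once it fills.
    void threshold_kernel(const ap_uint<8> in[HEIGHT * WIDTH],
                          ap_uint<8> out[HEIGHT * WIDTH],
                          ap_uint<8> thresh) {
        for (int i = 0; i < HEIGHT * WIDTH; ++i) {
    #pragma HLS PIPELINE II=1
            out[i] = (in[i] > thresh) ? ap_uint<8>(255) : ap_uint<8>(0);
        }
    }

Real kernels (convolutions, integral images) follow the same pattern, with line buffers holding a few rows of the image in on-chip block RAM.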

 

A new generation of DSPs, engineered specifically for Advanced Driver Assistance Systems (ADAS) in the automotive industry, is now available. These SoC families incorporate a heterogeneous, scalable architecture that mixes DSP cores, vision accelerators, ARM Cortex-A15 MPCore and dual Cortex-M4 processors. Texas Instruments and Toshiba have just released their own SoCs dedicated to this emerging market (3)(4). As an example, the TI TDA2x SoC combines fixed- and floating-point TMS320C66x DSP cores, a Vision AccelerationPac, an ARM Cortex-A15 MPCore and dual Cortex-M4 processors. The integration of a video accelerator for decoding multiple video streams over an Ethernet AVB network, along with graphics accelerators for rendering virtual views, enables a 3D viewing experience. The TDA2x also integrates a host of peripherals, including multi-camera interfaces (both parallel and serial) for LVDS-based surround-view systems, display outputs, CAN and Gigabit Ethernet.

 

The TI Vision Software Development Kit ships with more than 200 optimized functions across both the Embedded Vision Engine (EVE) and DSP libraries, giving developers the building blocks to jump-start development and reduce time to market. Both libraries cover low-, mid- and high-level vision processing: integral image, gradient, morphological operations and histograms are examples of low-level functions; HOG, rBRIEF, ORB, Harris corners and optical flow are key mid-level functions; and Kalman filtering and AdaBoost are high-level processing functions.
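
The TI libraries expose their own APIs, but the roles of these functions can be illustrated with OpenCV analogues. This sketch tracks Harris-style corners with pyramidal Lucas-Kanade optical flow (mid level) and smooths one track with a constant-velocity Kalman filter (high level); all parameters are placeholder values:

    #include <opencv2/opencv.hpp>

    int main() {
        cv::VideoCapture cap(0);                  // hypothetical camera source
        cv::Mat frame, gray, prevGray;
        std::vector<cv::Point2f> prevPts, nextPts;

        // High level: constant-velocity Kalman filter for one tracked point
        // (state = x, y, vx, vy; measurement = x, y).
        cv::KalmanFilter kf(4, 2, 0);
        cv::setIdentity(kf.transitionMatrix);
        kf.transitionMatrix.at<float>(0, 2) = 1.f;   // x += vx
        kf.transitionMatrix.at<float>(1, 3) = 1.f;   // y += vy
        cv::setIdentity(kf.measurementMatrix);
        cv::setIdentity(kf.processNoiseCov, cv::Scalar::all(1e-3));
        cv::setIdentity(kf.measurementNoiseCov, cv::Scalar::all(1e-1));
        cv::setIdentity(kf.errorCovPost);

        while (cap.read(frame)) {
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

            if (prevPts.empty()) {
                // Mid level: Harris-style corner detection.
                cv::goodFeaturesToTrack(gray, prevPts, 200, 0.01, 10);
            } else {
                // Mid level: pyramidal Lucas-Kanade optical flow.
                std::vector<uchar> status;
                std::vector<float> err;
                cv::calcOpticalFlowPyrLK(prevGray, gray, prevPts, nextPts,
                                         status, err);
                if (!nextPts.empty() && status[0]) {
                    // High level: fuse the noisy measurement into the track.
                    kf.predict();
                    cv::Mat meas = (cv::Mat_<float>(2, 1)
                                    << nextPts[0].x, nextPts[0].y);
                    kf.correct(meas);
                }
                prevPts = nextPts;
            }
            gray.copyTo(prevGray);
        }
        return 0;
    }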

 

Future development on these platforms can build on the OpenVX framework (5). OpenVX is an open, royalty-free standard for cross-platform acceleration of computer vision applications. It enables performance- and power-optimized computer vision processing, which is especially important in embedded and real-time use cases such as face, body and gesture tracking, smart video surveillance, advanced driver assistance systems (ADAS), object and scene reconstruction, augmented reality, visual inspection, robotics and more.
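
For a flavor of the programming model, here is a minimal OpenVX graph, assuming a conformant OpenVX 1.x implementation is installed; the runtime decides which accelerator runs each node, which is exactly what makes the standard attractive on heterogeneous SoCs:

    #include <VX/vx.h>

    int main() {
        vx_context ctx = vxCreateContext();
        vx_graph graph = vxCreateGraph(ctx);

        vx_image input = vxCreateImage(ctx, 1280, 800, VX_DF_IMAGE_U8);
        // A virtual image lives only inside the graph, so the runtime may
        // keep it in on-chip memory instead of round-tripping through DDR.
        vx_image blur = vxCreateVirtualImage(graph, 1280, 800, VX_DF_IMAGE_U8);
        vx_image gx = vxCreateImage(ctx, 1280, 800, VX_DF_IMAGE_S16);
        vx_image gy = vxCreateImage(ctx, 1280, 800, VX_DF_IMAGE_S16);

        vxGaussian3x3Node(graph, input, blur);
        vxSobel3x3Node(graph, blur, gx, gy);

        // Verify once (this is where the implementation schedules and
        // optimizes the nodes), then process once per frame.
        if (vxVerifyGraph(graph) == VX_SUCCESS)
            vxProcessGraph(graph);

        vxReleaseGraph(&graph);
        vxReleaseContext(&ctx);   // also reclaims the images
        return 0;
    }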

 

So, next time you’re looking for an advanced autopilot to control your UAV, take a look under the hood.



References:

(1) http://www.xilinx.com/products/silicon-devices/soc/zynq-7000/silicon-devices.html

(2) https://www.altera.com/products/soc/portfolio/cyclone-v-soc/overview.html

(3) http://www.ti.com/lit/wp/spry260/spry260.pdf

(4) http://toshiba.semicon-storage.com/ap-en/application/automotive/safety-assist/image-recognition.html

(5) https://www.khronos.org/openvx/

 


Comments

  • UPDATE:

    Nvidia demonstrated the DRIVE PX 2 platform for self-driving cars at CES 2016, but did not give many details about the SoC used on the board. Today, the company has finally provided more information about the Parker hexa-core SoC, which combines two Denver 2 cores and four Cortex-A57 cores with a 256-core Pascal GPU.

    Nvidia Parker SoC specifications:

    • CPU – 2x Denver 2 ARMv8 cores and 4x ARM Cortex-A57 cores with 2 MB + 2 MB L2 cache, coherent HMP architecture (meaning all 6 cores can work at the same time)
    • GPU – Nvidia Pascal GeForce GPU with 256 CUDA cores supporting DirectX 12, OpenGL 4.5, Nvidia CUDA 8.0, OpenGL ES 3.1, AEP and Vulkan, plus a 2D graphics engine
    • Memory – 128-bit LPDDR4 with ECC
    • Display – Triple display pipeline, each at up to 4K 60fps.
    • VPU – 4K60 H.265 and VP9 hardware video decoder and encoder
    • Others:
      • Gigabit Ethernet MAC
      • Dual-CAN (controller area network)
      • Audio engine
      • Security & safety engines including a dual-lockstep processor for reliable fault detection and processing
      • Image processor
    • ISO 26262 functional safety standard for electrical and electronic (E/E) systems compliance
    • Process – 16nm FinFet
    DRIVE PX 2 board with two Parker SoCs

    Parker is said to deliver up to 1.5 teraflops (native FP16 processing) of performance for “deep learning-based self-driving AI cockpit systems”.

    This type of board and processor is normally only available to car and parts manufacturers, and the company claims that 80 carmakers, tier-1 suppliers and university research centers are now using DRIVE PX 2 systems to develop autonomous vehicles. That means the platform should find its way into cars, trucks and buses soon, including some 100 Volvo XC90 SUVs that are part of an autonomous-car pilot program in Sweden slated to start next year.

  • Interesting post.
    I think creators should stick with other means rather than SoC parts intended for the auto industry.

    2 reasons why I would think so:


    1. For an aviation product, all of its parts, including OEM components such as SoCs, must meet aviation standards, which are quite different from auto industry standards.

    2. Using ADAS parts and modifying them for the aviation industry might reduce the interest of semiconductor manufacturers in providing the UAS industry with a unique family of SoCs.

  • Just an add-on to this blog:

    After reading this: http://international.download.nvidia.com/pdf/tegra/Tegra-X1-whitepa...

    I must admit that the new generation of GPUs is fully qualified to compete.


    The DRIVE PX platform supports up to twelve camera inputs, and each Tegra X1 processor on the platform can access the data from the twelve onboard cameras and process it in real-time. Assuming each of these twelve cameras is a 1 Megapixel (1280x800) HD camera outputting at 30 fps, DRIVE PX will have to process 360 Mega-pixels per second of total video data. Since DRIVE PX has the ability to process 1.3 Gigapixels per second, it is capable of handling even higher resolution cameras outputting at higher frame rates. For computer vision-based applications, having higher resolution camera data at higher frame rates allows for faster and more accurate detection of objects in the incoming video streams.

    http://international.download.nvidia.com/pdf/tegra/Tegra-X1-whitepaper-v1.0.pdf
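
    A quick back-of-the-envelope check of the numbers quoted above:

        #include <cstdio>

        int main() {
            const double cameras = 12, width = 1280, height = 800, fps = 30;
            const double input = cameras * width * height * fps / 1e6;  // Mpix/s
            const double capacity = 1300.0;  // 1.3 Gpix/s claimed for DRIVE PX
            std::printf("input %.0f Mpix/s, headroom %.1fx\n",
                        input, capacity / input);  // ~369 Mpix/s, ~3.5x
            return 0;
        }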
  • Hello Tony,

    There is a lot of development going on in vision systems for UAS. Much very interesting work comes from the Computer Vision and Geometry Lab at ETH Zurich, whose contributions helped make the PX4 and its optical flow sensor such amazing pieces of engineering. They recently took part in this video demonstrating fully autonomous avoidance: https://www.youtube.com/watch?v=_qah8oIzCwk

    When you look at an ADAS imaging system from a conceptual perspective, most of the components and algorithms are reusable for 3D navigation. The image acquisition and low-level pixel processing are exactly the same. At the intermediate level, most of the available functions, like HOG, rBRIEF, ORB, Harris corners and optical flow, can be reused, provided some additional computation is performed to account for yaw- and roll-induced errors, as in the sketch below.
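
    One common way to handle those rotation-induced errors is to subtract the flow predicted from the IMU body rates before treating the remainder as translation-induced. This sketch assumes a pinhole model with pixel coordinates relative to the principal point, and the sign conventions are only illustrative:

        #include <opencv2/core.hpp>

        // Predicted image-plane flow of a static point caused purely by
        // camera rotation (pinhole model, small-angle approximation).
        // f is the focal length in pixels; wx, wy, wz are body rates in
        // rad/s. Signs depend on the chosen axis conventions.
        cv::Point2f derotateFlow(cv::Point2f flow, cv::Point2f px, float f,
                                 float wx, float wy, float wz) {
            const float x = px.x, y = px.y;
            const float u = (x * y / f) * wx - (f + x * x / f) * wy + y * wz;
            const float v = (f + y * y / f) * wx - (x * y / f) * wy - x * wz;
            // What remains is (approximately) the translation-induced flow.
            return flow - cv::Point2f(u, v);
        }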

    At the high level this is obviously a totally different type of system in every respect. The good thing at this level is that a considerable number of tools and systems are available, considering that this part is the realm of C, C++, Python, Java and existing packages like Mission Planner.

     

  • Cool stuff. 

    Has any work been done to develop vision libraries tailored to the UAS application?  Obviously the vast body of work being done by talented automotive engineers is not going to carry over directly to an autonomous aircraft.

    What industrial UAS application or arena do you envision as the penetration market for this SoC technology?
