Combined with Intel’s Existing Assets, Movidius Technology – for New Devices Like Drones, Robots, Virtual Reality Headsets and More – Positions Intel to Lead in Providing Computer Vision and Deep Learning Solutions from the Device to the Cloud

Computer vision is a critical technology for smart, connected devices of the future. (Credit: Intel Corporation)

Source: https://newsroom.intel.com/editorials/josh-walden-intel-editorial/

With the introduction of RealSense™ depth-sensing cameras, Intel brought groundbreaking technology that allowed devices to “see” the world in three dimensions. To amplify this paradigm shift, they completed several acquisitions in machine learning, deep learning and cognitive computing to build a suite of capabilities that open an entirely new world of possibilities: from recognizing objects to understanding scenes, and from authentication to tracking and navigating. That said, as devices become smarter and more distributed, specific System on a Chip (SoC) attributes will be paramount to giving human-like sight to the 50 billion connected devices projected by 2020.

With Movidius, Intel gains low-power, high-performance SoC platforms for accelerating computer vision applications. Additionally, this acquisition brings algorithms tuned for deep learning, depth processing, navigation and mapping, and natural interactions, as well as broad expertise in embedded computer vision and machine intelligence. Movidius’ technology optimizes, enhances and brings RealSense™ capabilities to fruition.

We see massive potential for Movidius to accelerate our initiatives in new and emerging technologies. The ability to track, navigate, map and recognize both scenes and objects using Movidius’ low-power and high-performance SoCs opens opportunities in areas where heat, battery life and form factors are key. Specifically, we will look to deploy the technology across our efforts in augmented, virtual and merged reality (AR/VR/MR), drones, robotics, digital security cameras and beyond. Movidius’ market-leading family of computer vision SoCs complements Intel’s RealSense™ offerings in addition to our broader IP and product roadmap.

Computer vision will trigger a Cambrian explosion of compute, with Intel at the forefront of this new wave of computing, enabled by RealSense™ in conjunction with Movidius and our full suite of perceptual computing technologies.


Comments

  • UPDATE:

    http://www.movidius.com/news/vion-unveils-tarsier-machine-intellige...

    Vion Unveils Tarsier Machine Intelligence Module, Powered by Movidius Myriad 2 Processor

    The Tarsier module receives data and returns results over a USB interface, which greatly simplifies connecting it to existing hardware. Any host system with a USB port can integrate the module to accelerate machine intelligence and AI applications, and its USB 3.0 transfer rates are sufficient for a wide range of scenarios (a sketch of what driving such a device could look like follows below).

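
    For illustration only (Vion's own SDK is not described in the announcement), here is a minimal sketch of driving a Myriad 2 USB accelerator from Python, assuming the Movidius NCSDK-style mvnc API used by other Myriad 2 USB devices; the 'graph' file name and the 224x224 input size are assumptions:

    import numpy as np
    from mvnc import mvncapi as mvnc

    # Find the first Myriad 2 device attached over USB.
    devices = mvnc.EnumerateDevices()
    if not devices:
        raise RuntimeError('no Myriad device found on USB')
    device = mvnc.Device(devices[0])
    device.OpenDevice()

    # Load a network pre-compiled for the Myriad 2 (hypothetical 'graph' file).
    with open('graph', 'rb') as f:
        graph = device.AllocateGraph(f.read())

    # Push one frame over USB and block until the result comes back.
    frame = np.zeros((224, 224, 3), dtype=np.float16)  # camera frame stand-in
    graph.LoadTensor(frame, None)
    output, _ = graph.GetResult()
    print('top class:', int(output.argmax()))

    graph.DeallocateGraph()
    device.CloseDevice()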

  • @Olivier,

    what I mean is money-wise: looking here: https://developer.nvidia.com/cuda-gpus it might be more efficient to invest in a desktop GPU like an NVIDIA GTX 960, which has a compute capability of 5.2, for $250, compared to a TX1 that has the same power but at double the price for the dev kit, plus the carrier board to mount the TX1 module on the drone (an additional $350 to $600), which translates to a roughly $1,000 price tag for the companion computer alone... Additionally, if the experiment does not work as expected, you can use the GTX as a gamer card ;-)

    Looking at the robot perception group's latest experiments, it seems they do the big training offline and keep a lower-grade processor (like the Odroid) onboard for real-time inference with the trained CNN (see the sketch below). Given that we may also get a low-priced dedicated neural network processor from Intel, it might be wise to wait a little.. :-)
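
    A minimal sketch of that train-offline / infer-onboard split, assuming PyTorch; the tiny TrailNet model, the trailnet.pt file name and the three steering classes are all made up for illustration:

    import torch
    import torch.nn as nn

    # Tiny stand-in CNN (hypothetical; a real net would be larger).
    class TrailNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(32, 3)  # e.g. left / straight / right

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    # --- Offline, on the desktop GPU: train, then save the weights. ---
    device = 'cuda' if torch.cuda.is_available() else 'cpu'
    net = TrailNet().to(device)
    opt = torch.optim.SGD(net.parameters(), lr=0.01)
    images = torch.randn(8, 3, 64, 64, device=device)  # placeholder batch
    labels = torch.randint(0, 3, (8,), device=device)
    loss = nn.functional.cross_entropy(net(images), labels)
    opt.zero_grad(); loss.backward(); opt.step()
    torch.save(net.state_dict(), 'trailnet.pt')

    # --- Onboard (e.g. an Odroid, CPU only): load and run inference. ---
    onboard = TrailNet()
    onboard.load_state_dict(torch.load('trailnet.pt', map_location='cpu'))
    onboard.eval()
    with torch.no_grad():
        frame = torch.randn(1, 3, 64, 64)              # camera frame stand-in
        direction = onboard(frame).argmax(1).item()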

  • Hughes, not sure what you mean by "classify training images". When it comes to performance comparisons with humans, images used are always new images that the net has never seen, while the data sets used to train the net have been human labeled. 

    Patrick: No experience with the TX1, but it will certainly blow an Odroid out of the water. As for training, not sure, but I wouldn't be surprised if it compared favorably with a powerful desktop with a fast GPU.

    @Olivier The car industry is definitely one of the major players in this new space. ADAS is creating a brand-new field of AI, and all the money and resources invested benefit the whole vision-based UAV field. Intel has no great success story in this market; so far Nvidia has a comfortable lead.

    Getting back to the autonomous UAV, I am still wondering what the best development platform and onboard flying system would be: upgrade my Odroid to a TX1, or install a GPU on my desktop for training?
  • @Olivier & Patrick, about the human recognition rate of 85% vs. neural nets:

    This stat is about asking humans to classify training images. It has nothing to do with the human capability of recognizing what one sees in real life. When you walk down the street, do you recognize your street only 85% of the time? If so, I suggest you rapidly go get a brain scan for damage...

    So it is misleading to say neural nets recognize objects better than humans. But it is correct to say neural nets classify features of images showing partial details of a scene better than humans.

  • When it comes to autonomous driving, neural nets are starting to show great promise (a rough sketch of the end-to-end idea follows this comment). E.g.:

    Nvidia: End to End Learning for Self-Driving Cars,

    DAVE-2 Driving a Lincoln

    Related: Object Detection in the Wild by Faster R-CNN + ResNet-101

    Next, with a Movidius/Intel  Fathom descendant, miniaturized, aboard a drone?
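
    The end-to-end idea in that Nvidia paper is a single CNN trained to map raw camera frames directly to a steering command. A minimal sketch, assuming PyTorch; the layer sizes follow the paper's description, but activations and other details here are approximations:

    import torch
    import torch.nn as nn

    # PilotNet-style net: normalized camera frames in, one steering value out.
    class PilotNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(3, 24, 5, stride=2), nn.ELU(),
                nn.Conv2d(24, 36, 5, stride=2), nn.ELU(),
                nn.Conv2d(36, 48, 5, stride=2), nn.ELU(),
                nn.Conv2d(48, 64, 3), nn.ELU(),
                nn.Conv2d(64, 64, 3), nn.ELU(),
            )
            self.fc = nn.Sequential(
                nn.Flatten(),
                nn.Linear(64 * 1 * 18, 100), nn.ELU(),  # for 66x200 inputs
                nn.Linear(100, 50), nn.ELU(),
                nn.Linear(50, 10), nn.ELU(),
                nn.Linear(10, 1),                       # single steering command
            )

        def forward(self, x):  # x: (N, 3, 66, 200)
            return self.fc(self.conv(x))

    # Training reduces to regression against the recorded human steering angle.
    net = PilotNet()
    frames = torch.randn(4, 3, 66, 200)  # placeholder camera frames
    target = torch.randn(4, 1)           # recorded steering angles
    loss = nn.functional.mse_loss(net(frames), target)
    loss.backward()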

  • Well Craig, it really looks like they are all trying to seduce the developer base. Unfortunately their offer is not original; it is like a revamped DJI Manifold with a 2012 ROS SLAM development toolkit... Thanks for your offer, but I am sticking with my PokeDrone Deep Learning Training Kit
  • @Hughes I was referring to neural nets in general when mentioning that some can sometimes be better than humans at certain types of recognition. And they get better every year, e.g. in the ImageNet competition.

    That said, and as Patrick pointed out, the trail-recognizing one does pretty well. And that's a very simple and small neural network, trained on a limited data set and running with limited computing power on a computer aboard a quad. No doubt it can be improved with both a more sophisticated net and better training, although this typically requires massive computing power.

    Hence the appeal of the Movidius "neural net" chips: a drastic reduction in weight and power consumption while allowing ever more powerful and accurate neural net implementations. Oh, and those have one serious advantage over humans: they never get distracted! :)

  • Looks like Parrot is getting into the action with this developer kit:

    https://techcrunch.com/2016/09/07/parrot-announces-the-s-l-a-m-dunk...

    Parrot announces a dev kit that helps drones see and avoid obstacles
    Parrot, the French company that is probably best known for its AR.Drone and Bebop drones, today announced the Parrot S.L.A.M.dunk, a new development…
  • @Craig, you got my vote :-)

    Talking about the tipping point: I just got this feeling of catching up with a new technological wave. It has happened to me before; you guys might remember this?

    (image: Popular Electronics cover, January 1975)
