Combined with Intel’s Existing Assets, Movidius Technology – for New Devices Like Drones, Robots, Virtual Reality Headsets and More – Positions Intel to Lead in Providing Computer Vision and Deep Learning Solutions from the Device to the Cloud

Computer vision is a critical technology for smart, connected devices of the future. (Credit: Intel Corporation)


With the introduction of RealSense™ depth-sensing cameras, Intel brought groundbreaking technology that allowed devices to “see” the world in three dimensions. To amplify this paradigm shift, the company completed several acquisitions in machine learning, deep learning and cognitive computing to build a suite of capabilities that opens an entirely new world of possibilities: from recognizing objects to understanding scenes; from authentication to tracking and navigation. That said, as devices become smarter and more distributed, specific system-on-a-chip (SoC) attributes will be paramount to giving human-like sight to the 50 billion connected devices projected by 2020.

With Movidius, Intel gains low-power, high-performance SoC platforms for accelerating computer vision applications. Additionally, this acquisition brings algorithms tuned for deep learning, depth processing, navigation and mapping, and natural interactions, as well as broad expertise in embedded computer vision and machine intelligence. Movidius’ technology optimizes, enhances and brings RealSense™ capabilities to fruition.

We see massive potential for Movidius to accelerate our initiatives in new and emerging technologies. The ability to track, navigate, map and recognize both scenes and objects using Movidius’ low-power and high-performance SoCs opens opportunities in areas where heat, battery life and form factors are key. Specifically, we will look to deploy the technology across our efforts in augmented, virtual and merged reality (AR/VR/MR), drones, robotics, digital security cameras and beyond. Movidius’ market-leading family of computer vision SoCs complements Intel’s RealSense™ offerings in addition to our broader IP and product roadmap.

Computer vision will trigger a Cambrian explosion of compute, with Intel at the forefront of this new wave of computing, enabled by RealSense™ in conjunction with Movidius and our full suite of perceptual computing technologies.



  • Yeah Hugues,

    I must admit this slide might look like Intel's version of the Gartner Hype Cycle, but they have been on quite an impressive acquisition streak over the past year: Ascending Technologies, Yuneec, Altera, and now this little-known company with great products.

    As for neural networks, GPUs have really fired their development over the last few years; progress is now exponential, and it really seems we are shifting into this new paradigm. Intel does not want to miss the train this time. I guess there was a lesson learned with smartphones...

  • MR60

    Let's hope this is not yet another big corporate big-bang project that never delivers anything concrete. Just add the words "big data" to the top slide of this post and you have caricaturally pathetic, laughable IT-marketing gibberish. OK, enough sarcasm; let's give it a chance (but be patient: neural networks have been trying hard to learn something for the last 25 years).

  • I wouldn't be surprised if Intel incorporates this IP into their future IoT SoC offerings!

  • Good to know Patrick,

    Looks like Intel really is at the front of this, I look forward to seeing their development systems incorporating Movidius technology.



  • Gary, the Fathom chip from Movidius (an ASIC core built expressly for neural networks) can run TensorFlow at the same speed as a TX1 while drawing under 1 watt of power compared to 15 watts, so for me it is a big WOW.

    We are moving fast into the era of artificial intelligence, and with their portfolio in vision systems and autonomous flight, Intel has really gotten their hands on a gem!

  • I don't know if you've noticed, but RealSense has moved beyond its initial Kinect-like capabilities in a way that is, for us, very important.

    The initial Kinects (and RealSense sensors) were restricted to a maximum distance defined by their projected IR pattern (a holographic overlay of twisted line segments).

    Generally this was pretty short (less than 15 feet, sometimes less than 6 feet).

    However, one of their more recent models works fine beyond this limit, using natural illumination rather than depending on the built-in IR illumination source.

    It undoubtedly operates with reduced accuracy under these circumstances, but it greatly expands the range and permits them to be used outdoors under varied lighting conditions.

    Since navigation depends more on object detection and location than it does on absolute accuracy, this should be a real boon for our use.

    An undeclared capability might be user supplied high intensity IR secondary illumination where that might prove useful.

    Interestingly, the RealSense sensor with this capability is not their most recent one but their older, cheaper one.

    Definitely worth looking into the RealSense Robotics or Drone controller kits.

    And clearly Intel is very serious about this.

    Too bad they can't collaborate directly with Nvidia; these applications really need their multi-GPU architecture.

    Best Regards,


  • By the way, this is the intelligence behind the new DJI Phantom 4's visual collision-avoidance system... You can bet that Yuneec will get the same feature pretty soon ;-)
