Combined with Intel’s Existing Assets, Movidius Technology – for New Devices Like Drones, Robots, Virtual Reality Headsets and More – Positions Intel to Lead in Providing Computer Vision and Deep Learning Solutions from the Device to the Cloud
Source: https://newsroom.intel.com/editorials/josh-walden-intel-editorial/
With the introduction of RealSense™ depth-sensing cameras, Intel brought groundbreaking technology that allowed devices to “see” the world in three dimensions. To amplify this paradigm shift, they completed several acquisitions in machine learning, deep learning and cognitive computing to build a suite of capabilities that open an entirely new world of possibilities: from recognizing objects to understanding scenes, and from authentication to tracking and navigating. That said, as devices become smarter and more distributed, specific System on a Chip (SoC) attributes will be paramount to giving human-like sight to the 50 billion connected devices that are projected by 2020.
With Movidius, Intel gains low-power, high-performance SoC platforms for accelerating computer vision applications. Additionally, this acquisition brings algorithms tuned for deep learning, depth processing, navigation and mapping, and natural interactions, as well as broad expertise in embedded computer vision and machine intelligence. Movidius’ technology optimizes, enhances and brings RealSense™ capabilities to fruition.
We see massive potential for Movidius to accelerate our initiatives in new and emerging technologies. The ability to track, navigate, map and recognize both scenes and objects using Movidius’ low-power and high-performance SoCs opens opportunities in areas where heat, battery life and form factors are key. Specifically, we will look to deploy the technology across our efforts in augmented, virtual and merged reality (AR/VR/MR), drones, robotics, digital security cameras and beyond. Movidius’ market-leading family of computer vision SoCs complements Intel’s RealSense™ offerings in addition to our broader IP and product roadmap.
Computer vision will trigger a Cambrian explosion of compute, with Intel at the forefront of this new wave of computing, enabled by RealSense™ in conjunction with Movidius and our full suite of perceptual computing technologies.
Comments
UPDATE:
http://www.movidius.com/news/vion-unveils-tarsier-machine-intellige...
Vion Unveils Tarsier Machine Intelligence Module, Powered by Movidius Myriad 2 Processor
The Tarsier module uses a USB interface to receive data and return computation results, which greatly reduces the difficulty of connecting the module to an existing hardware system. As long as a USB interface is present, the Tarsier module can be integrated into a host system to accelerate machine intelligence and AI applications. The Tarsier module supports the full USB 3.0 data transfer rate, which is sufficient for a wide range of scenarios.
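For readers wanting to picture the integration model: the host enumerates the module as a USB device, streams input data out, and reads inference results back. Here is a minimal sketch of that pattern using pyusb; the vendor/product IDs, endpoint addresses and framing are hypothetical placeholders, since the actual Tarsier protocol is not published.

```python
# Hypothetical sketch of talking to a USB-attached inference module via pyusb.
# The IDs and endpoints below are placeholders, NOT the real Tarsier values,
# and the one-frame-in / one-result-out framing is an assumption.
import usb.core

VENDOR_ID, PRODUCT_ID = 0x1234, 0x5678   # placeholder USB IDs
EP_OUT, EP_IN = 0x01, 0x81               # assumed bulk OUT/IN endpoints

dev = usb.core.find(idVendor=VENDOR_ID, idProduct=PRODUCT_ID)
if dev is None:
    raise RuntimeError("accelerator module not found on the USB bus")
dev.set_configuration()

def infer(frame_bytes: bytes) -> bytes:
    """Send one raw input frame, read back the result blob."""
    dev.write(EP_OUT, frame_bytes, timeout=1000)
    return bytes(dev.read(EP_IN, 4096, timeout=1000))
```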
@Olivier,
What I mean is money-wise: looking here, https://developer.nvidia.com/cuda-gpus, it might be more efficient to invest in a desktop GPU like an NVIDIA GTX 960, which has a compute capability of 5.2, for 250$, compared to a TX1 that has the same power but at double the price for the dev kit, plus the carrier board to mount the TX1 module on the drone (an additional 350$ to 600$), which translates to a roughly 1,000$ price tag for the companion computer alone (see the arithmetic below)... Additionally, if the experiment does not work as expected, you can reuse the GTX as a gamer card ;-)
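To make the comparison concrete, here is the back-of-the-envelope arithmetic behind that ~1,000$ figure, using only the prices quoted above (not current market prices):

```python
# Rough cost comparison using the figures quoted in the comment above.
gtx960 = 250                          # desktop GPU, reusable as a gaming card
tx1_devkit = 2 * gtx960               # "double the price" of the GTX 960
carrier_low, carrier_high = 350, 600  # flight-ready carrier board estimates

print(f"GTX 960 route: ${gtx960}")
print(f"TX1 route: ${tx1_devkit + carrier_low} to ${tx1_devkit + carrier_high}")
# -> TX1 route lands around $850-$1100, i.e. roughly a 1,000$ companion computer
```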
Looking at the latest from the robot perception group, it seems their recent experiments show that they do the big training offline and keep a lower-grade processor (like the Odroid) onboard for real-time convolution with the trained CNN (see the sketch below). Add to that that we may soon get a low-priced dedicated neural network processor from Intel, and it might be wise to wait a little... :-)
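A minimal sketch of that train-offline / infer-onboard split, using PyTorch and a toy stand-in network (the group's actual model and framework are not shown here):

```python
# Sketch of the split described above: heavy training happens offline on a
# desktop GPU; the onboard computer only loads the frozen weights and runs
# forward passes. The tiny CNN below is a placeholder, not the real model.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 3),                    # e.g. turn-left / straight / turn-right
)

# --- offline, on the desktop GPU: train, then export the weights ---
# (training loop omitted)
torch.save(model.state_dict(), "trail_net.pt")

# --- onboard (Odroid/TX1): load once, then run per-frame inference ---
model.load_state_dict(torch.load("trail_net.pt", map_location="cpu"))
model.eval()
with torch.no_grad():
    frame = torch.rand(1, 3, 101, 101)   # placeholder for a camera frame
    action = model(frame).argmax(dim=1)  # predicted steering class
```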
Hughes, not sure what you mean by "classify training images". When it comes to performance comparisons with humans, the images used are always new images that the net has never seen, while the data sets used to train the net have been human-labeled.
Patrick: No experience with the TX1, but it will certainly blow an Odroid out of the water. As far as training goes, I'm not sure, but I wouldn't be surprised if it compared favorably with a powerful desktop with a fast GPU.
Getting back to the autonomous UAV, I am still thinking about what the best development platform and onboard flying system would be; upgrading my Odroid to a TX1, or installing a GPU on my desktop for training?
@Olivier & Patrick, about the human recognition rate of 85% vs a neural net:
This stat is about asking humans to classify training images. It has nothing to do with the human capability of recognizing what we see in real life. When you walk down the street, do you recognize your street only 85% of the time? If so, I suggest you rapidly go get a brain scan for damage...
So it is misleading to say neural nets recognize objects better than humans. But it is correct to say neural nets classify features of images showing partial details of a scene better than humans.
When it comes to autonomous driving, neural nets are starting to show great promise. E.g.:
Nvidia: End to End Learning for Self-Driving Cars,
Dave-2 Driving a Lincoln
Related: Object Detection in the Wild by Faster R-CNN + ResNet-101
Next, with a Movidius/Intel Fathom descendant, miniaturized, aboard a drone?
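For readers curious what "end to end" means in the Nvidia paper above: a single CNN maps raw camera pixels directly to a steering command, with no hand-coded lane detection in between. A rough PyTorch sketch of that architecture follows; the input size (3×66×200) and layer widths are taken from the paper, while the rest (activations, training details) is assumed.

```python
# Rough sketch of the PilotNet-style network from "End to End Learning for
# Self-Driving Cars": raw pixels in, one steering value out. Layer widths
# follow the paper; activation choice and other details are assumptions.
import torch
import torch.nn as nn

class EndToEndSteering(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, 3), nn.ReLU(),
            nn.Conv2d(64, 64, 3), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 1 * 18, 100), nn.ReLU(),
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 10), nn.ReLU(),
            nn.Linear(10, 1),            # steering command
        )

    def forward(self, x):                # x: (N, 3, 66, 200) camera frames
        return self.head(self.features(x))

steering = EndToEndSteering()(torch.rand(1, 3, 66, 200))  # one predicted angle
```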
@Hughes, I was referring to neural nets in general when mentioning that some can sometimes be better than humans at certain types of recognition. And they get better every year; see e.g. the ImageNet competition.
That said, and as Patrick pointed out, the trail-recognizing one does pretty well. And that's a very simple and small neural network, trained on a limited data set and running with limited computing power on a computer aboard a quad. No doubt it can be improved both with a more sophisticated net and better training, although this typically requires massive computing power.
Hence the appeal of the Movidius "neural net" chips. Drastic reduction in weight and power consumption while allowing for ever more powerful and accurate neural net implementations. Oh, and those have one serious advantage over humans: They never get distracted! :)
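Part of how such chips achieve that reduction is lowered arithmetic precision; the Myriad 2, for instance, runs networks in 16-bit floats. A small PyTorch illustration (purely for demonstration) of how halving precision halves weight storage, and with it the memory traffic per inference:

```python
# Toy illustration: converting weights from 32-bit to 16-bit floats halves
# the bytes moved per inference, one ingredient of low-power NN chips.
import torch.nn as nn

net = nn.Linear(1152, 100)             # toy layer standing in for a full net
fp32_bytes = sum(p.numel() * p.element_size() for p in net.parameters())
net.half()                             # convert parameters to 16-bit floats
fp16_bytes = sum(p.numel() * p.element_size() for p in net.parameters())
print(fp32_bytes, "->", fp16_bytes)    # e.g. 461200 -> 230600 bytes
```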
Looks like Parrot is getting into the action with this developer kit:
https://techcrunch.com/2016/09/07/parrot-announces-the-s-l-a-m-dunk...
@Craig , you got my vote :-)
Talking about the tipping point: I just got this feeling of catching up on a new technological wave. It happened to me before; you guys might remember this?