Combined with Intel’s Existing Assets, Movidius Technology – for New Devices Like Drones, Robots, Virtual Reality Headsets and More – Positions Intel to Lead in Providing Computer Vision and Deep Learning Solutions from the Device to the Cloud
With the introduction of RealSense™ depth-sensing cameras, Intel brought groundbreaking technology that allowed devices to “see” the world in three dimensions. To amplify this paradigm shift, Intel completed several acquisitions in machine learning, deep learning and cognitive computing to build a suite of capabilities that open an entirely new world of possibilities: from recognizing objects to understanding scenes; from authentication to tracking and navigation. That said, as devices become smarter and more distributed, specific System on a Chip (SoC) attributes will be paramount to giving human-like sight to the 50 billion connected devices projected by 2020.
With Movidius, Intel gains low-power, high-performance SoC platforms for accelerating computer vision applications. Additionally, this acquisition brings algorithms tuned for deep learning, depth processing, navigation and mapping, and natural interactions, as well as broad expertise in embedded computer vision and machine intelligence. Movidius’ technology optimizes, enhances and brings RealSense™ capabilities to fruition.
We see massive potential for Movidius to accelerate our initiatives in new and emerging technologies. The ability to track, navigate, map and recognize both scenes and objects using Movidius’ low-power and high-performance SoCs opens opportunities in areas where heat, battery life and form factors are key. Specifically, we will look to deploy the technology across our efforts in augmented, virtual and merged reality (AR/VR/MR), drones, robotics, digital security cameras and beyond. Movidius’ market-leading family of computer vision SoCs complements Intel’s RealSense™ offerings in addition to our broader IP and product roadmap.
Computer vision will trigger a Cambrian explosion of compute, with Intel at the forefront of this new wave of computing, enabled by RealSense™ in conjunction with Movidius and our full suite of perceptual computing technologies.
You think "racing with no hands on sticks" and I think "can now effectively build something!".......it would be amazing to watch racing drones learn and behave, but I guess it will take some of the fun out of it. But maybe we can allow a little bit of human input along with the VR glasses so then we have something to sell (I imagine at the state fair where anyone could, for $5, pretend they are a champion)....
We've all said and thought this in the past - but this time we REALLY have reached the tipping point in terms of computing. I remember reading Byte and Wired decades ago and realizing stuff like:
1. At the time, they said the amount of information needed to fully teleport a human arm would take existing hard drives stacked all the way to the moon.
2. That VR was nearly impossible because just opening your eyes took in gigabytes of data.
Well, we are there now.....except the teleporting. That will take a few more millennia.
When the internet first got popular the metric which amazed me was how quickly the sum total of knowledge was doubling (due to communication, mostly!). It went from thousands of years to just a few years. I don't know if anyone is tracking it now, but it will likely be pushed hard by the AI and the real world implementation of self-driving cars and other robotics.
In the end it's not just the tech that is exciting. It's the idea that millions of people will not have to suffer car injuries and death due to the same tech....and that's just the start. After those problems are solved, medicine and other fields will eventually hit tipping points.
So, when are we gonna upgrade our educational system?
@Craig, you're absolutely right: the business model is shifting from silicon and IP integration to sophisticated pretrained datasets that are fine-tuned for a specific mission/behavior. On top of that, the tools will get better at reducing the training time and the required resources (memory & CPU/GPU/NNU).
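A minimal sketch of that fine-tuning idea, using a toy NumPy setup in place of a real backbone: the pretrained feature extractor is frozen, and only a small task-specific head is retrained for the new mission. Every name, size, and dataset here is illustrative, not any real Intel/Movidius API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pretrained feature extractor (a CNN backbone in
# practice). It is never updated during fine-tuning.
W_backbone = rng.normal(size=(32, 16))

def extract_features(x):
    """Frozen backbone: project raw inputs into feature space."""
    return np.tanh(x @ W_backbone)

# Hypothetical mission-specific data: 200 samples, 3 behaviour classes.
X = rng.normal(size=(200, 32))
feats = extract_features(X)
y = (feats @ rng.normal(size=(16, 3))).argmax(axis=1)  # synthetic labels

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Fine-tuning: only the small task head is trained, by plain gradient
# descent on the cross-entropy loss; the backbone stays fixed throughout.
W_head = np.zeros((16, 3))
onehot = np.eye(3)[y]
for _ in range(300):
    probs = softmax(feats @ W_head)
    W_head -= 0.5 * feats.T @ (probs - onehot) / len(X)

acc = (softmax(feats @ W_head).argmax(axis=1) == y).mean()
print(f"training accuracy after fine-tuning the head: {acc:.2f}")
```

Because only the 16x3 head is trained, this needs a fraction of the memory and compute of training the whole network - which is the resource saving the comment points at.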
What is really amazing is that we technically do not have to rewrite the whole process for a different mission. For example, it is plausible to take the existing forest-trail mobile robot and retrain it for FPV racing. By feeding the same neural network different sequences of onboard video and the associated RC input (MAVLink), we could get, after a couple of hours of training on a GPU-based system, an efficient FPV racer that can be uploaded to a quad racer's companion computer and go racing ... no hands on sticks !!
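The data-pairing step described above could be sketched like this: each onboard video frame is matched with the pilot's stick position at that moment, quantized into the three steering classes the trail network uses. The PWM values, dead band, and helper names are assumptions for illustration, not actual MAVLink or autopilot specifics.

```python
# Hypothetical pre-processing: turn a flight log into (frame, label) pairs
# for a three-class Left/Straight/Right steering network.

def rc_to_class(roll_us, center=1500, dead_band=100):
    """Quantize an RC roll channel (PWM microseconds) into a steering class."""
    if roll_us < center - dead_band:
        return "left"
    if roll_us > center + dead_band:
        return "right"
    return "straight"

def build_dataset(frames, rc_log):
    """Pair each onboard video frame with the pilot's stick command."""
    return [(frame, rc_to_class(roll)) for frame, roll in zip(frames, rc_log)]

# Toy example: four frame placeholders and the matching stick positions.
frames = ["f0", "f1", "f2", "f3"]
rc_log = [1500, 1200, 1850, 1480]
dataset = build_dataset(frames, rc_log)
print(dataset)
# [('f0', 'straight'), ('f1', 'left'), ('f2', 'right'), ('f3', 'straight')]
```

Once the pairs exist, "retraining for a new mission" is just running the same supervised training loop on this new dataset.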
This is somewhat of an educated guess - but I assume that the performance of these systems on aerial robots is closely tied to factors other than the basic Movidius or RealSense hardware.
The point is that a given drone maker buying a full chipset from Intel may not end up with a system that works well without the IP and feedback (simulations too?) from thousands of hours of flight in all conditions.
I think that is why DJI first released the developer Matrice 100 with Guidance, and then, many months later, released the P4. I spoke to some DJI guys at CES and they said DJI was amazed at what developers were doing with the Matrice 100. So they are, in effect, doing both machine and human learning in order to hone the visual sensing.
As a non-engineer I don't know the specifics, but my point is that the SoC may save developers and manufacturers amazing amounts of time and money, yet it will not provide a solution without proper programming and integration. In the end it's the maker who determines, for example, how quickly a machine should stop when it "sees" something.
I recommend that you read the associated paper: http://rpg.ifi.uzh.ch/docs/RAL16_Giusti.pdf
Here is what is written:
TABLE I: Results for the three-class (Left / Straight / Right) problem.

Method     DNN     Saliency   Human1   Human2
Accuracy   85.2%   52.3%      86.5%    82.0%
Two human observers were each asked to classify 200 randomly sampled images from the testing set into one of the three classes. We observe that DNN methods perform comparably to humans, meaning that they manage to extract much of the information available in the inputs.
Note: In case of failure, the drone simply stops.
@Olivier, thanks for the YouTube link. However, I have to correct a misleading point made in this video: neural networks DO NOT recognize the pathway better than humans. What neural networks do better than humans is classify images showing a small, partial piece of the path. A human works from a view of the whole path and obviously knows where the path is in 100% of cases.
So neural networks still have to improve from an 85% recognition success rate toward 100%; otherwise drones will crash into obstacles in 15% of cases, which is still unacceptable.
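One common way to reconcile an 85% classifier with the "stop on failure" behaviour noted above is a confidence gate: act on the network's output only when its top softmax probability clears a threshold, and stop otherwise. A minimal sketch - the 0.75 threshold and function names are assumptions, not from the paper:

```python
def steer(probs, threshold=0.75):
    """probs: (left, straight, right) softmax output from the vision network.

    Returns a steering command, or "stop" when the network is not confident
    enough to act - the fail-safe described for the trail-following drone.
    """
    labels = ("left", "straight", "right")
    best = max(range(3), key=lambda i: probs[i])
    if probs[best] < threshold:
        return "stop"
    return labels[best]

print(steer((0.10, 0.85, 0.05)))   # confident  -> "straight"
print(steer((0.40, 0.35, 0.25)))   # ambiguous  -> "stop"
```

The trade-off is between how often the drone halts unnecessarily and how often it acts on a wrong prediction; tuning the threshold sets that balance.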
Well @Paul, this is on my Santa wish list as of now :-)
Hope they won't screw up my delivery with their new traveling-salesman training dataset....
@Patrick, are you the lucky recipient of one of the 1,000 Fathom USB sticks? (You mention Christmas.)
I think you might be right Patrick, now it's getting interesting.
Intel has updated the picture; now we can see Remi El-Ouazzane from Movidius and Josh Walden from Intel holding the Intel Aero Compute Board in one hand and the Fathom USB accelerator in the other, with the new Intel Aero Drone on the table.
Well Gary, I think we may have found our next Christmas presents !!!
Hughes, you may be a bit harsh on neural networks ;) How about this for recognition?
(ICCV15, Kaiming He et al.) Some of the latest convolutional neural networks are sometimes better than humans at certain types of visual recognition ...
And closer to home, a drone that learned to navigate forest trails autonomously with a neural network.
Yes, still in its infancy, but pretty impressive IMHO. Now with chips like Fathom ...