From TechCrunch. It works on Raspberry Pi!
Following on the heels of their announcement a few weeks ago about their FLIR partnership, Movidius is making another pretty significant announcement regarding their Myriad 2 processor. They’ve incorporated it into a new USB device called the Fathom Neural Compute Stick.
You can plug the Fathom into any USB-capable device (computer, camera, GoPro, Raspberry Pi, Arduino, etc.), and that device can become “smarter” in the sense that it can use the Myriad 2 processor inside the stick to run a neural network (I’ll come back to all this).
Essentially, it means a device with the Fathom plugged into it can react cognitively or intelligently, based on the things it sees with its camera (via computer vision) or data it processes from another source. A device using it can make its own decisions depending on its programming. The key point is it can do this all natively—right on the stick. No call to the cloud is necessary.
In addition to the stick, Movidius has also created a software system they are calling the Fathom Deep Learning Software Framework that lets you optimize and compile learning algorithms into a binary that will run on the Myriad 2 at extremely low power. In a computer vision scenario, Movidius claims it can process 16 images per second using a single watt of power at full-bore/peak performance. There are many other cognitive scenarios it can be used for though.
They have 1,000 units available for free to qualified customers, researchers and small companies, which they’ll be making available in the coming weeks. A larger rollout is planned for Q4, targeting a sub-$100 price for the device. So that’s the news.
The Complicated Part: What’s All This Business About Neural Networks And Algorithms?
Still, I wanted to understand how this device is used, in practical terms…to visualize where the Fathom and its software framework fit in with neural networks in an actual deployment. After struggling to grasp it for a bit (and after a few phone calls with Movidius) I finally came up with the following greatly simplified analogy.
Say you want to teach a computer system to recognize images or parts of images and react to them very quickly. For example, you want to program a drone camera to be able to recognize landing surfaces that are flat and solid versus those that are unstable.
To do this, you might build a computer system, with many, many GPUs and then use an open source software library like TensorFlow on that system to make the computer a learning system—an Artificial Neural Network. Once you have this system in place, you might begin feeding tens or even hundreds of thousands of images of acceptable landing surfaces into that learning system: flat surfaces, ship decks, driveways, mountaintops…anywhere a drone might need to land.
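As a toy sketch of that training step, here is a minimal classifier built with plain NumPy rather than a full TensorFlow-on-GPUs setup, using synthetic two-number "features" (hypothetical stand-ins for real landing-surface images) and a single-neuron logistic regression in place of a deep network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for image features: "flat" surfaces cluster around
# low values, "unstable" ones around high values (hypothetical data).
n = 200
flat = rng.normal([0.2, 0.1], 0.05, size=(n, 2))
rough = rng.normal([0.8, 0.7], 0.05, size=(n, 2))
X = np.vstack([flat, rough])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = safe, 1 = unsafe

# A one-neuron "network" trained by gradient descent on the labeled data.
w = np.zeros(2)
b = 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted probability of "unsafe"
    w -= 1.0 * (X.T @ (p - y) / len(y))  # gradient step on weights
    b -= 1.0 * np.mean(p - y)            # gradient step on bias

def is_safe(features):
    # Below the learned decision boundary means "safe to land".
    return (features @ w + b) < 0
```

The point is the same as in the article: the expensive part is this training loop over many labeled examples; once the weights exist, a single `is_safe` call is cheap.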
Over time, this large computer system learns, building an algorithm that lets it anticipate answers on its own, very quickly. But accessing this system from remote devices requires internet connectivity, and there is some delay for a client/server transfer of information. In a situation like landing a drone, a couple of seconds could be critical.
How the Fathom Neural Compute Stick figures into this is that the algorithmic computing power of the learning system can be optimized and output (using the Fathom software framework) into a binary that runs on the Fathom stick itself. In this way, any device the Fathom is plugged into has instant access to a complete neural network, because a version of that network runs locally on the Fathom, and thus on the device.
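The article doesn't describe Fathom's internals, but one common optimization in this "compile for a low-power chip" step is weight quantization: shrinking 32-bit float weights to 8-bit integers so the binary is smaller and cheaper to execute. A hypothetical illustration:

```python
import numpy as np

# Illustrative only: trained float32 weights as a compiler might see them.
weights = np.array([-0.73, 0.12, 0.05, 0.91, -0.2], dtype=np.float32)

# Map the weight range onto signed 8-bit integers.
scale = np.abs(weights).max() / 127.0
q = np.round(weights / scale).astype(np.int8)  # quantized weights (the "binary")
deq = q.astype(np.float32) * scale             # what inference effectively sees

error = np.max(np.abs(deq - weights))          # bounded by the quantization step
```

The quantized network gives up a tiny amount of precision (`error` stays below one quantization step) in exchange for a fourfold smaller representation and integer arithmetic that low-power hardware handles efficiently.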
So in the previous drone example, instead of waiting on cloud calls for landing-site decisions, the drone could make those decisions itself, based on what its camera is seeing in real time, and with extremely low power consumption.
That’s sorta badass.
The Bigger Picture
If you stretch your mind a bit, you can begin to see other practical applications of miniature, low-power, cognitively capable hardware like this: intelligent flight, security cameras with situational awareness, smaller autonomous vehicles, new levels of speech recognition.
Those same size and power factors also make wearables and interactive eyewear excellent targets for use (albeit more likely in a directly integrated way rather than USB add-on). This is notable as Augmented and Mixed Reality capabilities continue to make headlines and get closer to the comfort zones of the general public.
And since computer vision (CV) algorithms are one of the backbones that give AR/MR practical uses, making CV more powerful and more cognitive in a small footprint, at low power, has possibly never been more important. I can see this kind of hardware fitting into that possible future.
Strategically, this approach gives Movidius another way to reach customers. They already have integrated hardware agreements with larger partners like Google and FLIR, but releasing the Fathom as a modular add-on opens a new market among small- and medium-sized businesses that still need onboard intelligence for their projects.
Comments
Did anyone see WHERE you can apply for a developer unit? This has amazing potential.
Hello Ravi,
I successfully fly onboard OpenCV object tracking with an ODROID XU4 connected to the flight controller using serial MAVLink commands driven by DroneKit Python.
Can anyone suggest a low-cost computer vision system that connects to a standard camera and can give navigation commands to a Pixhawk (object tracking)?
Theory is nice. Let's see it in action :)
This could be really cool, definitely going to apply for a developer unit.