Project Tango is Google's plan to bring full SLAM (simultaneous localization and mapping — real-time optical mapping and position tracking) to smartphones.
We can build it faster together.
As we walk through our daily lives, we use visual cues to navigate and understand the world around us. We observe the size and shape of objects and rooms, and we learn their position and layout almost effortlessly over time. This awareness of space and motion is fundamental to the way we interact with our environment and each other. We are physical beings living in a 3D world. Yet our mobile devices assume that the physical world ends at the boundaries of the screen.
The goal of Project Tango is to give mobile devices a human-scale understanding of space and motion.
Over the past year, our team has been working with universities, research labs, and industrial partners spanning nine countries around the world to harvest the last decade of research in robotics and computer vision, concentrating that technology into a unique mobile phone. Now, we're ready to put early prototypes into the hands of developers who can imagine the possibilities and help bring those ideas into reality.
We hope you will take this journey with us. We believe it will be one worth traveling.
- Johnny Lee and the ATAP-Project Tango Team
Comments
More than 2 years later, this project is getting closer to reality.....
http://www.anandtech.com/show/10522/qualcomm-at-siggraph-2016-proje...
See this
https://www.youtube.com/watch?v=JyG1EeqCmHY
and
https://vimeo.com/38764177
Miniaturization combined with computational power is the key to upcoming gadgets that will astonish us every time they are introduced to the market :).
Earlier we did SLAM on a hexacopter with ArduPilot and a Kinect-based payload system with on-board processing (the overall payload weight was around 0.8 kg with batteries). The bulkiness of the payload forced us to work with a hexacopter, for its weight-carrying capability, rather than a small-scale quadcopter. Here are some of the payload pictures
With these new sorts of gadgets (Tango, mobile-based monocular SLAM, etc.), on-board real-time indoor and outdoor navigation is a near-future reality :) (especially on small-scale custom aerial platforms, with the major part of the battery power no longer consumed carrying the payload).
Cheers
Gary, lol, I feel your pain.
I was planning to try a simple "point cloud" program with the $99 LRF that I got from Lightware, but I fear that in the time it takes to get it working (a few days) there will be a $5.00 stereoscopic navigation AI on a chip. I guess that would be a good problem to have.
I do hope that a couple of members here get one of the 200 Google kits and keep us posted! That would be awesome.
Cool packaging of SLAM+phone.
I wonder how it compares with Qualcomm's approach: (http://wwws.qualcomm.com/media/documents/andrew-davison-presentation)
That said, I'm looking at my Z1S AR camera app, and it already does a poor man's SLAM: proper trajectories for the animations, proper edge detection for static icons, and use of the phone's IMU. The only thing missing is the holy grail of vision navigation: proper collision avoidance algorithms, which aren't there yet.
ETH Zurich is a key partner. Lorenz Meier (Pixhawk and MAVLink architect) has been working on 3D SLAM mapping for phones for some time. Is he part of this?
I am in no way jealous no not one little bit.
As you'll see from the video, we at 3D Robotics have been working with them on this, and IRIS has been one of their research platforms to explore SLAM from the air. Very cool stuff.
Great project! Navigation by vision (supported by gyro odometry) is an essential missing link in robotics — for indoors, anyway, and for outdoors when GPS doesn't work or isn't precise or fast enough. This is definitely the future of robot navigation. I would fuse the Tango project with the Bublcam project to make it even more powerful!
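The fusion of vision with gyro odometry mentioned above can be sketched with a simple complementary filter: the gyro gives a smooth, fast heading estimate that drifts over time, while the vision system gives a slower but drift-free absolute fix. This is a hypothetical minimal illustration, not Tango's actual pipeline (which fuses full 6-DoF pose, not just heading):

```python
import math

def fuse_heading(gyro_rate, dt, vision_heading, prev_heading, alpha=0.98):
    """Complementary filter for heading (radians).

    Trusts the gyro at short timescales (high-pass) and the
    drift-free vision estimate at long timescales (low-pass).
    gyro_rate is in rad/s; alpha close to 1 favors the gyro.
    """
    # Integrate the gyro for a fast, smooth short-term estimate.
    gyro_heading = prev_heading + gyro_rate * dt
    # Innovation: vision fix minus gyro estimate, wrapped to [-pi, pi]
    # so the blend behaves correctly across the +/-pi boundary.
    err = math.atan2(math.sin(vision_heading - gyro_heading),
                     math.cos(vision_heading - gyro_heading))
    # Nudge the gyro estimate toward the vision fix to cancel drift.
    return gyro_heading + (1.0 - alpha) * err

# Example: a stationary robot whose gyro drifts at 0.01 rad/s.
# Vision keeps reporting the true heading of 0.0 rad.
heading = 0.0
for _ in range(1000):
    heading = fuse_heading(gyro_rate=0.01, dt=0.02,
                           vision_heading=0.0, prev_heading=heading)
# The drift stays bounded instead of growing without limit.
```

Real systems (and Tango itself) use a Kalman-style filter over the full pose, but the principle is the same: the slow absolute sensor corrects the fast relative one.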
It's taken them a while. I want two now — in fact, yesterday! In flight, the more the merrier for rapid acquisition. At one point I thought flying a Kinect was going to be the plan.