3D Robotics


They said it was "pencil-sized" which seems like a bit of an exaggeration, but it sure is smaller than the Kinect. 

From IEEE Spectrum:

PrimeSense's 3D sensor, which is what's inside the Microsoft Kinect, has revolutionized vision for very cheap and very expensive robots. That's not what it was supposed to do: it was supposed to help lazy gamers get off their couches and jump around a little bit. PrimeSense is still very focused on marketable consumer applications with the next generation of the 3D sensor, called Capri, but we're more interested in what it'll do for our robots. At CES last week, we got some hands-on time with Capri, and we have some details for you.

Engineers are familiar with the idea of being able to pick two of the following: faster, better, cheaper. PrimeSense has instead gone with much smaller, probably cheaper (although we're not sure), and arguably just a little bit worse when it comes to performance. The overall size of the sensor has shrunk by a factor of 10, which is really the big news here, and otherwise, most of the specs have stayed the same. Here's what you'll get with Capri:

  • Field of view: 57.5° × 45°

  • Range: 0.8 m to 3.5 m

  • VGA depth map (640×480)

  • USB 2.0 powered

  • Standard off the shelf components

  • OpenNI compliant

PrimeSense told us that their focus was to "maintain" performance while focusing on miniaturization and cost reduction. And performance is nearly the same, with the exception of there being no RGB camera. This isn't great news for robotics, since having an integrated color camera is a nice feature, but we have to remember that we're just piggybacking on the fact that PrimeSense is really trying to get into the mobile market: Capri is small enough that it'll be able to fit into tablets (and eventually smartphones), and for pure 3D sensing, an RGB camera just adds cost and complexity that's unnecessary for the application. It's tough that robotics isn't yet a big enough market to allow us to dictate features for something like Capri, but as long as we can adapt this sort of technology to make robots cheaper and more capable, we'll get there.
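For a rough sense of what those specs mean in practice, here is a back-of-the-envelope sketch (our arithmetic, not PrimeSense's) of the coverage and per-pixel resolution implied by a 57.5° × 45° field of view spread over a 640×480 depth map:

```python
import math

# Published Capri specs
H_FOV_DEG, V_FOV_DEG = 57.5, 45.0    # field of view
H_PIX, V_PIX = 640, 480              # VGA depth map
MIN_RANGE_M, MAX_RANGE_M = 0.8, 3.5  # working range

def footprint_width_m(range_m, fov_deg=H_FOV_DEG):
    """Width of the scene covered at a given range."""
    return 2.0 * range_m * math.tan(math.radians(fov_deg / 2.0))

def lateral_resolution_mm(range_m):
    """Approximate real-world width of one depth pixel at a given range."""
    return footprint_width_m(range_m) / H_PIX * 1000.0

print(f"angular resolution: {H_FOV_DEG / H_PIX:.3f} deg/pixel")
print(f"coverage at {MIN_RANGE_M} m: {footprint_width_m(MIN_RANGE_M):.2f} m "
      f"({lateral_resolution_mm(MIN_RANGE_M):.1f} mm/pixel)")
print(f"coverage at {MAX_RANGE_M} m: {footprint_width_m(MAX_RANGE_M):.2f} m "
      f"({lateral_resolution_mm(MAX_RANGE_M):.1f} mm/pixel)")
```

At the 3.5 m end of the range, each depth pixel covers roughly 6 mm of the scene laterally, which is plenty for obstacle avoidance on a robot.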



  • Hi CJG, from what I understand the Capri works just like the Kinect.

    Basically it acquires a rectangular depth array: how far away each depth pixel is.

    It transmits that array to you serially via USB.

    It can also send the data to you unprocessed, as a monochrome light-intensity value per pixel.

    For your information, this is not a time-of-flight device but a structured-light device.

    The infrared emitter superposes a rectangular array of infrared light dots onto the environment, and the monochrome camera chip receives the pattern and effectively measures the displacement of each of those dots (against the projector's known pattern) to determine distance.

    It also uses a special astigmatic lens which distorts the light circles into ellipses oriented differently according to distance (distance = angle of ellipse).

    The Kinect analyzes and combines this information internally to produce a depth-image map.

    In normal ambient light it is surprisingly accurate, although it is washed out by direct sunlight.
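The displacement-to-distance step described above is plain triangulation. A minimal sketch of that math (the focal length and baseline below are illustrative placeholders, not PrimeSense's actual parameters):

```python
def depth_from_disparity(disparity_px, focal_px=570.0, baseline_m=0.075):
    """Structured-light triangulation: a projected dot that shifts by
    `disparity_px` pixels between its expected and observed position
    lies at depth = f * B / d. The f and B values here are illustrative
    guesses, not the sensor's real calibration."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# The farther the surface, the smaller the shift of each dot:
for d in (10.0, 20.0, 40.0):
    print(f"disparity {d:5.1f} px -> depth {depth_from_disparity(d):.2f} m")
```

The inverse relationship is why depth precision degrades with range: at long distances a large change in depth produces only a sub-pixel change in disparity.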

  • Looking at the Capri posts on PrimeSense's web site, the replies indicate that they will only supply the Capri to companies that can purchase over 100,000 units annually, meaning it will be totally unavailable to us unless we buy some other electronic device and rip the sensor out of it.

    They are only responding to companies with that kind of purchasing power, and they have absolutely no intention of supporting the hobby or development market at all.

    What a shame that one of the most important and useful products for future robotics and multicopter applications is being produced by a company with no regard for that market whatsoever.

  • According to IEEE, Capri engineering samples are 2 to 3 months out, with "consumer kits" (??) by the end of the year.

    So it's a good time to start designing around the Kinect (which is pretty much identical, except for also having a color camera) for potential robot and multicopter use.

    There is already some Android porting of Kinect software, so the PX4 is looking like a good potential candidate.

  • After looking more at the PX4, it does seem like it might be well suited to integrating the Capri for reference-based navigation, obstacle avoidance, and path finding (SLAM).

    It should have the necessary raw processing power and memory for this type of task.

  • anyway, worst product name ever

  • It will likely have limitations in full sunlight, which interferes with the infrared, but it should be usable outdoors in indirect light without much problem. This is actually true for the Kinect as well.

    That problem could be addressed by kicking up the infrared emission intensity and, primarily, by narrowing its optical bandwidth and providing proper filtering, but there has not yet been much emphasis on doing so.

    Other than that, its main limitation for outdoor use is its maximum range of 3.5 meters.

    That should be OK for ground vehicles and multicopters; for fixed-wing airplanes, not so much.

  • I assume that this is, like the Kinect, limited to indoor use?

  • Also, CJG, I don't think PrimeSense actually has much in the way of development software yet, and I think the Nexus association indicates that they think Google ought to stick a Capri in it.

    I should also mention that one of the main needs is to be able to identify, record, and make use of visual "reference points" that can then be used for navigating a three-dimensional world relative to them. The Capri, alone or with a camera, can make this task much easier and is really the key to computationally non-intensive navigation in a "challenging" environment. See simultaneous localization and mapping (SLAM).

  • These are the same people who make the Kinect, and while the Kinect does a few additional things by overlaying depth with its built-in color camera, I believe what they are saying is that they are simply doing the internal calculations to produce the distance matrix from the structured-light information. You get a rectangular matrix of numbers, each of which is how far away from you that point in your environment is. On the Kinect, at least, it is up to you to take that information and extract motion or gestures as changes over time (the main use of the Kinect).

    But for our use, that distance matrix is the most useful information you could want.

    Avoiding obstacles, and even finding and identifying paths and openings, should be computationally easy and not time consuming.

    Comprehensive mapping and object identification will be far more computationally challenging.

    One of the most important things with one of these sensors is determining what you really want and need to "see" in order to do what is important.

    A lot of tasks are easy; some are not.

    PrimeSense is still being pretty tight-lipped about actual operational capability, probably while trying to get more of those 100,000-per-year purchasers on board. I don't blame them; I just think they are overlooking the boost they could get from our community.
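As a toy illustration of how cheap that first class of task can be, here is a sketch (pure Python on a synthetic frame; the thresholds and helper names are ours, not from any PrimeSense SDK) that scans a depth matrix for the nearest return and checks whether a central corridor is clear:

```python
def nearest_obstacle_m(depth, invalid=0.0):
    """Smallest valid depth reading in the frame (invalid = no return)."""
    readings = [d for row in depth for d in row if d != invalid]
    return min(readings) if readings else None

def corridor_clear(depth, min_clearance_m, width_frac=0.33):
    """True if every valid pixel in the central vertical band of the
    frame is farther away than min_clearance_m."""
    cols = len(depth[0])
    lo = int(cols * (0.5 - width_frac / 2))
    hi = int(cols * (0.5 + width_frac / 2))
    return all(d > min_clearance_m
               for row in depth
               for d in row[lo:hi] if d != 0.0)

# Synthetic 4x9 "depth frame": a wall at 1.2 m on the left, open space ahead
frame = [[1.2, 1.2, 1.2, 3.0, 3.1, 3.0, 2.9, 3.0, 3.1] for _ in range(4)]
print(nearest_obstacle_m(frame))   # nearest surface anywhere in the frame
print(corridor_clear(frame, 2.0))  # is the central band deeper than 2 m?
```

Both checks are a single linear pass over the matrix, which is why simple avoidance is easy even on a small flight controller, while full mapping is not.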

  • @Gary @Chris ...

    ... in the presentation, they talked about some/most calculations of the new Capri being done in the module itself. Do you or Chris know more about this?
