Today at the Intel Developer Forum, CEO Brian Krzanich announced both the company's Aero drone development board and a full ready-to-fly drone based on Aero and the company's RealSense sense-and-avoid solution, which is already used on the Yuneec Typhoon H drone. Both use the Dronecode PX4 flight stack.
Both will be available in Q4 2016. The Aero board is $399; the price for the whole drone has not been set. More details are here.
IDF San Francisco 2016 – Drones: Intel Reveals UAV Developments and Availability of New Technologies at IDF

Aug. 17, 2016 – Intel Corporation today announced its involvement in the development of multiple best-in-class unmanned aerial vehicles (UAVs), commonly called drones, showcasing how they interact with their environment, solve problems and thrill users by helping them explore and interact with their worlds like never before.
Intel® Aero Platform for UAVs

The Intel® Aero Platform is available today for developers to build their own drones. This purpose-built UAV developer kit, powered by an Intel® Atom™ quad-core processor, combines compute, storage, communications and flexible I/O, all in a form factor the size of a standard playing card. When matched with the optional Vision Accessory Kit, developers will have tremendous opportunities to launch sophisticated drone applications into the sky. Aero supports several “plug and play” options, including a flight controller with Dronecode PX4 software, Intel® RealSense™ technology for vision and the AirMap SDK for airspace services, and it will support LTE for communications. The Intel Aero Platform is available for preorder now on click.intel.com: the Intel Aero compute board is $399, the Intel Aero Vision Accessory Kit is $149, and the Intel Aero Enclosure Kit is $69.
A separate Intel Aero Platform Ready-to-Fly Drone will be available in Q4.

Yuneec Typhoon H* with Intel RealSense Technology

Now publicly available, the Yuneec Typhoon H is the most advanced compact aerial photography and videography platform available, featuring Intel RealSense technology. With an intelligent obstacle-navigation system, the drone can see objects and navigate itself around them. The drone has an Intel RealSense camera and an Intel Atom processor, while the ground station is also equipped with an Intel Atom processor. The Typhoon H with Intel RealSense technology is available for purchase for $1,899.

AscTec Falcon 8*

The AscTec Falcon 8 drone went into serial production in 2009 and has since been used globally for professional applications, most recently as an aerial inspection and surveying tool for Airbus*. The patented V-form octocopter is designed for precision and safety with the reliable AscTec HighPerformance GPS and the new AscTec Trinity control unit. It weighs only 2.3 kilograms at takeoff and works with maximum efficiency in the air, onshore and offshore, even in challenging conditions.
Intel and Drone Policy Advocacy

Intel CEO Brian Krzanich was recently appointed by the Federal Aviation Administration (FAA) to chair the Drone Advisory Committee, a group focused on addressing “integration strategies” regarding drones. In August, Krzanich addressed The White House Office of Science and Technology Policy, which includes experts in government, academia and industry, to discuss airspace integration, public and commercial uses, and ways to ensure safety, security and privacy in this emerging field. On Tuesday afternoon, Anil Nanduri (vice president and general manager, UAV Segment and Perceptual Computing Group at Intel), Earl Lawrence (director, Unmanned Aircraft Systems Integration Office at the FAA), Art Pregler (UAS director at AT&T*), Ronnie Gnecco (innovation manager for UAVs at Airbus), and Shan Phillips (USA CEO at Yuneec) discussed how new drone capabilities and regulatory changes present new opportunities for drone developers.
Comments
Hi Lucas and Patrick,
Great to have a coherent description of the two methods in use by the R200 series versus the others.
From what I can glean from the article, at its core the R200 is a simple two-camera stereoscopic vision system.
The R200 is also the only one generally useful to us as its range is potentially adequate and it doesn't suffer from IR washout.
The newer ones are designed to be gesture or video-game response devices in a controlled environment and are really not nearly as suitable as drone or robotic vision sensors.
So I will discuss only the potentially useful R200 from this point on (which is also what is included in Intel's robotics and flight control systems).
Basically, the two cameras are separated by a fixed baseline, and the offset (disparity) between where each camera sees a given object in the scene corresponds to that object's distance from the cameras. This is the computer-vision equivalent of retinal offset, the cue our eyes use to perceive depth, and it is the basis of what is called "disparity space".
The closer an object or edge is to the cameras, the larger its disparity between the two images; the farther away it is, the smaller the disparity.
Depth accuracy therefore degrades as the object gets farther away, because the disparity shrinks toward zero, and there is also a minimum working distance: get too close and the object falls outside the view of one of the cameras entirely.
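To make the geometry concrete, here is a minimal sketch of the depth-from-disparity relationship. The focal length and baseline are made-up illustrative values, not actual R200 parameters:

```cpp
#include <cstdio>

int main() {
    // Illustrative values only -- not actual R200 specs.
    const double focal_px   = 600.0;  // focal length in pixels
    const double baseline_m = 0.07;   // separation between the two cameras

    // Stereo geometry: depth = focal * baseline / disparity.
    // A fixed one-pixel matching error costs more depth accuracy
    // the smaller the disparity, i.e. the farther the object.
    for (double disparity_px : {60.0, 20.0, 6.0, 2.0}) {
        double depth_m = focal_px * baseline_m / disparity_px;
        double err_m   = focal_px * baseline_m / (disparity_px - 1.0) - depth_m;
        std::printf("disparity %4.0f px -> depth %6.2f m (~%.2f m per px of error)\n",
                    disparity_px, depth_m, err_m);
    }
    return 0;
}
```

Running this shows exactly the effect described above: at 60 px of disparity a one-pixel error is centimeters, while at 2 px it is tens of meters.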
This method is computationally intensive, because it requires continuously matching features between the two images so that corresponding points can be identified in common.
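For a feel of what that correspondence search involves when done in software on the host, here is a hedged sketch using OpenCV's block matcher (the file names are placeholders; any rectified grayscale stereo pair will do):

```cpp
#include <opencv2/opencv.hpp>

int main() {
    // Placeholder file names -- substitute your own rectified stereo pair.
    cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);
    if (left.empty() || right.empty()) return 1;

    // Block matching: for every pixel, search along the epipolar line in
    // the other image for the best-matching patch. This per-pixel search
    // is what makes software stereo so computationally expensive.
    cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(/*numDisparities=*/64,
                                                    /*blockSize=*/15);
    cv::Mat disparity;
    bm->compute(left, right, disparity);  // fixed-point disparity, CV_16S

    cv::Mat disp8;
    disparity.convertTo(disp8, CV_8U, 255.0 / (64 * 16.0));  // scale for display
    cv::imwrite("disparity.png", disp8);
    return 0;
}
```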
RealSense has evidently solved the computational problem, at least to some degree, because the camera itself outputs depth information for each combined image pixel.
The texture-emitting, wide-area "laser" diode, which (to my understanding from previous descriptions) uses a holographic filter to project progressively tilted short line segments, allows more accurate depth information to be extracted.
At least indoors and out of direct sunlight.
Outdoors, in sunlight, the laser cannot overcome the IR background and so does not contribute to the accuracy of the depth image.
However, outdoors the overall depth range that can be viewed increases to as much as 10 meters or more (at ever-decreasing accuracy as distance increases, of course).
While I tend overall to favor active TOF imaging, as in laser scanners or the Kinect V2 IR flash TOF camera, there is a tremendous advantage to the passive stereo approach if you can get it to work.
You do not have to fight the sun to illuminate the area, and you can simply provide additional IR illumination when an area is under-illuminated.
It looks like the Realsense R200 might have serious potential after all.
A quick thought: if you wanted to increase indoor or nighttime range, you could simply supply an external (perhaps brightness-adjustable) IR light source. It would wipe out the depth-accuracy advantage of the fancy built-in holographic laser system, but you could get 10 meters of range under any circumstances, so it could be offered as a choice, and maybe the two methods could even be interleaved.
Just speculation; I don't know for sure that it would work, but I can't think of a reason why not.
I invite comments on what I have said here; I am very interested in useful 3D imaging systems.
Best regards,
Gary
LOL, a flying 4K smartphone... this is cute actually :-)
A major limitation: 100 m range.
And a funny thing on the spec sheet: lifetime 9 mins!!! Live fast & die young.
I think one can actually buy and fly this SnapDragon Drone today:
http://www.banggood.com/Zero-Dobby-Pocket-Selfie-Drone-With-13MP-HD...
Might try one myself. There are a couple other similar ones.
I think Chris mentioned here in late December 2015 that 3DR was going to introduce one - or at least that is how I understood his blog post:
"Qualcomm, a 3DR investor, has released a sneak peek of what it will be showing at CES next week. 3DR will be also displaying in the Qualcomm booth. Draw your own conclusions ;-)"
Hey thanks Lucas for this :-)
Some specs here ("Camera Specifications"): https://github.com/IntelRealSense/librealsense/blob/master/doc/came...
And lots of interesting stuff to read on the issues as well.
Patrick, on an autopilot or companion computer you will probably not use Windows, so what you are looking for to use RealSense is actually librealsense, available at https://github.com/IntelRealSense/librealsense
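As a rough sketch of what reading depth with librealsense looks like, written against the original v1 API that supported the R200 (untested here, so treat it as illustrative rather than definitive):

```cpp
#include <librealsense/rs.hpp>
#include <cstdint>
#include <cstdio>

int main() try {
    rs::context ctx;
    if (ctx.get_device_count() == 0) { std::printf("no device\n"); return 1; }
    rs::device * dev = ctx.get_device(0);

    dev->enable_stream(rs::stream::depth, rs::preset::best_quality);
    dev->start();

    dev->wait_for_frames();
    const uint16_t * depth = reinterpret_cast<const uint16_t *>(
        dev->get_frame_data(rs::stream::depth));

    rs::intrinsics intrin = dev->get_stream_intrinsics(rs::stream::depth);
    float scale = dev->get_depth_scale();  // raw units -> meters

    // Depth at the center pixel, in meters (0 means "no data").
    int cx = intrin.width / 2, cy = intrin.height / 2;
    std::printf("center depth: %.3f m\n", depth[cy * intrin.width + cx] * scale);
    return 0;
} catch (const rs::error & e) {
    std::printf("librealsense error: %s\n", e.what());
    return 1;
}
```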
It is typical and disgusting how the big, heavy players reuse work done freely by communities to brag about false innovation in the media. Furthermore, every engineer on this planet knows an Atom processor is not capable of real-time vision processing.
I hope ArduPilot can soon create real innovation with sense-and-avoid sensors on a TX1...
Kabir, there are a lot of great things that PX4 has done to make it easy to integrate code, but the modular approach also makes it very difficult to do the flight-control side of things well. Luckily it is easy to make multirotors fly in good conditions, so the shortcomings in the PX4 flight stack don't show up.
For industrial applications the conditions are rarely nice, and that is where flight control becomes important. Unfortunately this is not well understood in fledgling drone-manufacturer circles and the development community.
Lots of toys but tools are harder to come by.
Hi Patrick, I very much agree with your "integrate multiple sources of information" comment.
And we are now very much in the early days of even having appropriate sensors to work with, let alone firmware.
Most important is going to be some sort of reliable and adequate 3D vision, and right now the options fall into two camps: TOF laser scanning and stereo cameras.
Both have disadvantages: scanning is slow and, right now at least, expensive; stereo cameras are computationally very expensive and much less accurate, and the interpretation and value of their data are highly variable.
RealSense and the TOF Kinect technologies are departures in that both use a specialized camera array and whole-area flash illumination.
The primary disadvantage of whole-area illumination is providing a flash bright enough to overcome background IR and to get a good reflection off poorly reflective surfaces.
But both more or less directly produce a usable 3D point cloud, which is a very versatile structure for extracting position, obstacle-avoidance, environment and navigation information.
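As a back-of-the-envelope illustration of why a depth image converts so directly into a point cloud, here is a hedged pinhole-model sketch (the intrinsics are placeholders, not any particular sensor's calibration):

```cpp
#include <vector>

struct Point3 { float x, y, z; };

// Back-project one depth pixel (u, v) into a 3D point using pinhole
// intrinsics: fx, fy are focal lengths in pixels, (cx, cy) is the
// principal point (optical center).
Point3 deproject(int u, int v, float depth_m,
                 float fx, float fy, float cx, float cy) {
    return { (u - cx) * depth_m / fx,
             (v - cy) * depth_m / fy,
             depth_m };
}

// Convert a whole depth image (meters, row-major) into a point cloud,
// skipping pixels where the sensor returned no reading.
std::vector<Point3> to_cloud(const std::vector<float> & depth,
                             int width, int height,
                             float fx, float fy, float cx, float cy) {
    std::vector<Point3> cloud;
    for (int v = 0; v < height; ++v)
        for (int u = 0; u < width; ++u) {
            float d = depth[v * width + u];
            if (d > 0.0f) cloud.push_back(deproject(u, v, d, fx, fy, cx, cy));
        }
    return cloud;
}
```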
It does seem that the versions of the RealSense sensor supplied with both the robotics and flight-controller systems are designed to work at 3 to 4 meters and beyond, so it could be a very useful navigation sensor, perhaps supplemented with a longer-range TOF laser scanner.
I hope the Kinect One (V2) technology comes back, though; it is just plain superior to RealSense, but Microsoft has so dropped the ball on it (it's not even in stock at Microsoft anymore).
In my opinion, the very best sensor available right now is Lightware's SF40C TOF Laser Scanner.
At $1,000 it is a huge bargain, and it is lightning fast in comparison with pretty much all other laser scanners, even ones costing ten times as much or more.
You still need to scan in an additional axis, but it is very fast and works out to 50 meters.
Really useful for instant obstacle avoidance or for production of full surrounding 3D point clouds.
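For what it's worth, here is a minimal generic sketch (not the SF40/C's actual serial protocol, just the common math) of how one planar scan of ranges becomes Cartesian obstacle points; scanning the extra axis mentioned above extends the same idea to 3D:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Pt2 { float x, y; };

// Convert one planar laser scan into 2D points in the sensor frame.
// ranges_m[i] is the distance measured at angle start_rad + i * step_rad.
std::vector<Pt2> scan_to_points(const std::vector<float> & ranges_m,
                                float start_rad, float step_rad) {
    std::vector<Pt2> pts;
    pts.reserve(ranges_m.size());
    for (std::size_t i = 0; i < ranges_m.size(); ++i) {
        float a = start_rad + step_rad * static_cast<float>(i);
        pts.push_back({ ranges_m[i] * std::cos(a), ranges_m[i] * std::sin(a) });
    }
    return pts;
}
```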
Thanks, Gary, for your input. It seems to me that Intel is just trying "not to miss the emerging market" this time, as they did with the smartphone.
Truly functional and affordable autonomous guidance and obstacle-avoidance systems are still a few months away.
These systems have to integrate multiple sources of information (GPS, IMU, vision, laser range finder, sonar) and need to interface with a multilayered mission planner that can seamlessly communicate across all levels of operation: autopilot, avoidance, local path planning, global path planning, and mission-planning strategies such as mission-completion and avoidance heuristics and advanced recognition like neural networks.
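Purely as an illustration of that layering (every name below is hypothetical, not any real autopilot's API), the skeleton might look something like this:

```cpp
#include <memory>
#include <vector>

// Hypothetical sketch of the layered integration described above.
struct SensorSample { double timestamp; /* measurement payload */ };

struct SensorSource {               // GPS, IMU, vision, laser range finder, sonar
    virtual ~SensorSource() = default;
    virtual SensorSample read() = 0;
};

struct PlannerLayer {               // autopilot, avoidance, local/global path, mission
    virtual ~PlannerLayer() = default;
    virtual void update(const std::vector<SensorSample> & fused) = 0;
};

// One control tick: poll every sensor, then let each layer react in order,
// so avoidance can override local planning, which can override the mission.
void tick(std::vector<std::unique_ptr<SensorSource>> & sensors,
          std::vector<std::unique_ptr<PlannerLayer>> & layers) {
    std::vector<SensorSample> fused;
    for (auto & s : sensors) fused.push_back(s->read());
    for (auto & l : layers)  l->update(fused);
}
```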
ETH Zurich, the Autonomous Intelligent Systems group at Bonn, and CSAIL are making remarkable progress in this field... not to forget the excellent work from Kabir ;-)
Hi All,
I've been interested in RealSense-style 3D depth vision technology since the approach was originally introduced on the original Microsoft Xbox Kinect.
I even had it working under the original Microsoft Robotics development platform.
But I also found out about some of its limitations there.
It uses an IR flash "grid" and a holographic filter to produce its depth samples.
It is not a TOF camera.
Under ideal or good conditions the results can actually be pretty good, but under sub-optimal conditions (high ambient IR light or poor IR reflectivity) the quality of the retrieved information falls off rapidly.
The newer Kinect One camera switched to true TOF technology and seems much more robust.
Of course, the only problem with that is that Steve Ballmer at Microsoft dropped all support for the Robotics Development Platform.
So now we are essentially faced with Intel adopting the old, inferior RealSense technology and introducing both robotics and flight-controller platforms that support it.
It might still be good enough, but from researching it a bit on the ROS wiki, it doesn't look like it has even as much support as the Kinect or Kinect One.
Of course, these products are new from Intel so hopefully it will get better.
BTW, I see conflicting information for the two different RealSense sensors.
The newer SR300 is much faster but seems to have a much shorter range (1.2 meters max).
The older F200 and R200 are variously listed at 1.2 meters, or 3 to 4 meters or more, maximum range.
Best I can determine is that they actually come in short and long range flavors.
Here is the Intel developer page for RealSense products; it's worth taking a look.
https://software.intel.com/en-us/realsense/home
It does look like Intel is going to finally provide at least some access to development tools for this.
Just wish we could have gotten as much for the Kinect One true TOF technology.
Here is one very interesting integration of a Kinect V2 (same as Kinect One) with Nvidia Jetson TX1:
http://jetsonhacks.com/2016/07/11/ms-kinect-v2-nvidia-jetson-tx1/
These JetsonHacks pages are definitely worth looking at; they also cover RealSense.
The Nvidia TX1 is exactly the right kind of processor (a massively parallel GPU) for handling 3D point-cloud data.