Hi all, 

We recently released this video on autonomous exploration and inspection. Have a look!

Summary: 

The robot employs its visual-inertial navigation system to localize itself in the environment while simultaneously mapping and 3D-reconstructing it. Based on the newly proposed "Receding Horizon Next-Best-View Planner", the robot computes the next best step for efficient volumetric exploration of unknown spaces. This is achieved by predicting a sequence of next-best-views via sampling-based methods and information-gain measures, executing only the first step, and then repeating the whole process in a receding horizon fashion. Once full volumetric exploration has been achieved, the robot shifts its focus to the surface reconstruction of objects of interest in the environment.
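
To make the planning loop concrete, below is a minimal 2D Python sketch of the receding-horizon next-best-view idea: sample short branches of candidate views, score each view by the unknown volume it would reveal discounted by the travel cost to reach it, execute only the first step of the best branch, then replan. The grid, sensor model, parameter values and function names are illustrative placeholders only; they are not taken from our actual implementation, which runs on board against the live 3D reconstruction.

    import math
    import random

    UNKNOWN, FREE = -1, 0
    SIZE, SENSE_R = 40, 6                  # toy grid size and sensing radius (cells)
    LAMBDA, HORIZON, SAMPLES = 0.25, 3, 60  # cost discount, branch length, branches per replan

    grid = [[UNKNOWN] * SIZE for _ in range(SIZE)]

    def sense(pos):
        # Toy sensor: mark every cell within SENSE_R of pos as known.
        for x in range(SIZE):
            for y in range(SIZE):
                if math.hypot(x - pos[0], y - pos[1]) <= SENSE_R:
                    grid[x][y] = FREE

    def gain(view):
        # Information gain: number of still-unknown cells the view would reveal.
        return sum(1 for x in range(SIZE) for y in range(SIZE)
                   if grid[x][y] == UNKNOWN
                   and math.hypot(x - view[0], y - view[1]) <= SENSE_R)

    def sample_view(near):
        # Sample a candidate view in the neighbourhood of an existing node.
        x = min(SIZE - 1, max(0, near[0] + random.randint(-SENSE_R, SENSE_R)))
        y = min(SIZE - 1, max(0, near[1] + random.randint(-SENSE_R, SENSE_R)))
        return (x, y)

    pose = (SIZE // 2, SIZE // 2)
    sense(pose)

    while any(UNKNOWN in row for row in grid):
        best_branch, best_score = None, 0.0
        for _ in range(SAMPLES):
            branch, score, cost = [pose], 0.0, 0.0
            for _ in range(HORIZON):
                view = sample_view(branch[-1])
                cost += math.hypot(view[0] - branch[-1][0], view[1] - branch[-1][1])
                score += gain(view) * math.exp(-LAMBDA * cost)  # gain, discounted by cost
                branch.append(view)
            if score > best_score:
                best_branch, best_score = branch, score
        # Execute only the FIRST step of the best branch, update the map, then replan.
        pose = best_branch[1] if best_branch else sample_view(pose)
        sense(pose)

    print("volumetric exploration of the toy grid complete")

In the real system the candidate views are generated with sampling-based methods over the onboard 3D reconstruction rather than a toy grid, but the execute-only-the-first-step-and-replan structure is the same.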

Paper:

A. Bircher, M. Kamel, K. Alexis, H. Oleynikova, R. Siegwart, "Receding Horizon "Next-Best-View" Planner for 3D Exploration", IEEE International Conference on Robotics and Automation 2016 (ICRA 2016), Stockholm, Sweden 

The paper has been accepted and will be presented at ICRA this year.

Code: 

The code will be open sourced; it will be available by the time the paper is presented at ICRA, and possibly earlier. If you are already interested, send us an e-mail and we will update you once it is out.

Previous relevant work (addressing the related problem of optimized inspection when a geometric model of the structure is known a priori):

A. Bircher, K. Alexis, M. Burri, P. Oettershagen, S. Omari, T. Mantel, R. Siegwart, "Structural Inspection Path Planning via Iterative Viewpoint Resampling with Application to Aerial Robotics", IEEE International Conference on Robotics and Automation 2015 (ICRA 2015), May 26-30, 2015, Seattle, Washington, USA.

Code:

https://github.com/ethz-asl/StructuralInspectionPlanner

Hope you like it!

Kostas Alexis

http://www.kostasalexis.com/


Comments

  • Eyes. Passive, cheap to run, processing-hungry, but if you are hauling a body through the air you are paying a high premium already. No wonder evolution has converged on them multiple times. We have the cams, we have the FPGAs, we suck at figuring out efficient post-processing. Actively throwing light out is a stopgap (it falls off sharply with range; small bats are only competitive in the dark, hunting prey from relatively close up) until we get proper processing algorithms. Watching with great interest :-)

  • @Andreas, 

    There is a very large body of work on this problem. A few comments on the points you raise:

    About the VI-Sensor: what you read on the Skybotix website is true, so it is not a commercial system that a third party can buy right now. But be sure that the technology is coming.

    The videos from CVG, ASL and other labs at ETH and other top universities in the field show what is possible now. You can be confident that limited-range obstacle avoidance and mapping is coming at low cost, based on time-of-flight and structured-light camera systems. These are, however, bound to operate correctly only at limited distances and under limited exposure to strong light sources such as direct daylight (i.e. they work better in shadow). Camera-based solutions are much more versatile in terms of localization, mapping and operating under different lighting conditions, yet they require a lot of processing power. Still, I am optimistic that this is also on its way to the commercial market. The technology is there, at least conceptually, and I think a lot of companies are now making this effort.

    I loved the video from MIT too. It's cool!

    I can't give a single answer on which sensor will win the game. So many exist, from cameras to Kinect-like sensors to LiDARs. I would say that, due to capability, versatility and cost, a camera will always be at the core of any solution for typical conditions.

  • @Rob

    It is not quite as you say. Yuneec is using direct depth data from the RealSense, which is very different from having to do all the calculations based on a camera-IMU system. On the other hand, a camera is not limited to very short range like the RealSense (and other structured-light sensors) and is much more robust to lighting conditions. And not only that: the Yuneec system only avoids the obstacles ahead of it; it does not explore and map its environment.

    Having said that, what Intel showed (in collaboration with AscTec) is super cool!!! Actually, amazingly cool, especially if one considers that this is coming to the commercial market!!!

  • Kosta, I'm afraid the links to the sensor description and spec sheet on the page you linked to above are dead. Moreover, currently googling for the sensor on the skybotix.com site I get:

    "Thank you for your interest in Skybotix and the VI-Sensor. Due to a strategic refocus we decided to discontinue the VI-Sensor Early Adopter Program and the VI-Sensor itself effective immediately"

    I'm guessing my suspicion of a time-of-flight depth sensor is wrong and it is a binocular (+IMU) system. This video has come out of the Computer Vision and Geometry Group of ETH so it's not such a stretch of the imagination that it might have something to do with it.

    However, your video is a bit inconclusive. When we see the reconstruction of the drone's vision outdoors (0:32, 2:09, 2:18), it doesn't seem to scan further than perhaps 4-5 metres (of course, that data might have been sliced off in post-processing).

    Very recently, we saw the other super cool video by Andrew Barry: vision-based obstacle avoidance on a plane (simple obstacles, low speed, short range, but still awesome).

    Of course, what I'm driving at is the Holy Grail of real-time sense and avoid (won't regulators just love it). Can it sense a Cessna at 500 metres? Can it help navigate through a crowded paraglider gaggle? We know that the navigation can be computed in real time if the data is there, thanks to this 2012 gem from MIT. However, that one is cheating by scanning a slice aligned with the plane of the wings and fuselage. Barry's vid is the best vision-based one I've seen for something fast and far enough for planes (in a slow, forgiving environment), and it uses binocular vision as well. The sensor in your vid might also be usable, but from its use on that vehicle it's hard to say.

    I'm wondering if this type of sensor is finally going to be the answer to all our prayers, and whether it's anywhere near good, light, low-power and cheap enough to start getting excited about.

  • Nice work.

    It's interesting how slow the process here is compared to the speed with which the Yuneec Typhoon navigated the forest at the CES demonstration. Makes you wonder what's going on there. Why such a speed differential?

    Also curious that this ETH project didn't use PX4?  Maybe AscTec funded it?  But then why open source the results?

  • @Andreas: Have a look here: http://www.intel.com/content/www/us/en/nuc/overview.html . The latest one I have uses an i7. The camera system is this one: http://wiki.ros.org/vi_sensor . It was developed at the Autonomous Systems Lab at ETH Zurich together with Skybotix. No, it is not tailored only to short distances; both in terms of baseline and software it works fine outdoors. Check this video: https://youtu.be/95XGvEs9iTs . The point cloud that is updated online in the video is also computed online (no offline software). Only when the speaker mentions post-processing (towards the end, with the mesh) is that part post-processed.

    @UAS_Pilot

    Thanks!

  • Very cool.

  • So, the high-level processor is a dedicated separate board? Could you tell us which one? The vid says something about stereo cams; I assume these are time-of-flight (and presumably short-range) ones, only good for indoor applications, correct? I also assume that the point clouds and meshes are generated via the stereo cams, not by photogrammetry on 2D vid feeds, correct?

  • @Gary - totally agreed :) - actually let's improve the role of the human

    @Andreas - everything is on-board. The autopilot is from AscTec. The high-level processor is based on an Intel NUC. 

  • Is all computation on board? On what hardware?
