From the MIT press release:
MIT’s Robust Robotics Group — which fielded the team that won the last AUVSI contest — has set itself an even tougher challenge: developing autonomous-control algorithms for the indoor flight of GPS-denied airplanes. At the 2011 International Conference on Robotics and Automation (ICRA), a team of researchers from the group described an algorithm for calculating a plane’s trajectory; in 2012, at the same conference, they presented an algorithm for determining its “state” — its location, physical orientation, velocity and acceleration. Now, the MIT researchers have completed a series of flight tests in which an autonomous robotic plane running their state-estimation algorithm successfully threaded its way among pillars in the parking garage under MIT’s Stata Center.
“The reason that we switched from the helicopter to the fixed-wing vehicle is that the fixed-wing vehicle is a more complicated and interesting problem, but also that it has a much longer flight time,” says Nick Roy, an associate professor of aeronautics and astronautics and head of the Robust Robotics Group. “The helicopter is working very hard just to keep itself in the air, and we wanted to be able to fly longer distances for longer periods of time.”
With the plane, the problem is more complicated because “it’s going much faster, and it can’t do arbitrary motions,” Roy says. “They can’t go sideways, they can’t hover, they have a stall speed.”
Found in translation
To buy a little extra time for their algorithms to execute, and to ensure maneuverability in close quarters, the MIT researchers built their own plane from scratch. Adam Bry, a graduate student in the Department of Aeronautics and Astronautics (AeroAstro) and lead author on both ICRA papers, consulted with AeroAstro professor Mark Drela about the plane’s design. “He’s a guy who can design you a complete airplane in 10 minutes,” Bry says. “He probably doesn’t remember that he did it.” The plane that resulted has unusually short and broad wings, which allow it to fly at relatively low speeds and make tight turns but still afford it the cargo capacity to carry the electronics that run the researchers’ algorithms.
Because the problem of autonomous plane navigation in confined spaces is so difficult, and because it’s such a new area of research, the MIT team is initially giving its plane a leg up by providing it with an accurate digital map of its environment. That’s something that the helicopters in the AUVSI challenges don’t have: They have to build a map as they go.
But the plane still has to determine where it is on the map in real time, using data from a laser rangefinder and inertial sensors — accelerometers and gyroscopes — that it carries on board. It also has to deduce its orientation — how much it’s tilted in any direction — its velocity, and its acceleration. Because many of those properties are multidimensional, determining the plane’s state at any given moment means calculating 15 different values.
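To make those 15 values concrete, here is a minimal sketch of one plausible state vector. The article only names location, orientation, velocity and acceleration, so the exact grouping into five 3-value blocks below is an illustrative assumption, not necessarily the researchers’ parameterization.

```python
import numpy as np

# Hypothetical 15-dimensional state vector for the plane.
# The article lists location, orientation, velocity and acceleration;
# the specific 3-value groupings below are an illustrative assumption.
state = np.concatenate([
    np.zeros(3),  # position x, y, z (meters, in the map frame)
    np.zeros(3),  # orientation roll, pitch, yaw (radians)
    np.zeros(3),  # linear velocity vx, vy, vz (m/s)
    np.zeros(3),  # angular rate p, q, r (rad/s)
    np.zeros(3),  # linear acceleration ax, ay, az (m/s^2)
])
assert state.shape == (15,)
```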
That’s a massive computational challenge, but Bry, Roy and Abraham Bachrach — a grad student in electrical engineering and computer science who’s also in Roy’s group — solved it by combining two different types of state-estimation algorithms. One, called a particle filter, is very accurate but time consuming; the other, called a Kalman filter, is accurate only under certain limiting assumptions, but it’s very efficient. Algorithmically, the trick was to use the particle filter for only those variables that required it and then translate the results back into the language of the Kalman filter.
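The press release doesn’t give the algorithmic details, but a toy 1-D sketch of that general pattern might look like the following: weight position particles with a laser-style measurement, summarize the particle set as a Gaussian, and feed that summary into a small Kalman filter as a pseudo-measurement. All of the model choices here (constant-velocity dynamics, the wall location, the 40 Hz timestep) are illustrative assumptions, not the MIT implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Particle filter over position (hard-to-linearize laser-style measurement).
N = 500
particles = rng.normal(0.0, 1.0, N)          # position hypotheses
weights = np.ones(N) / N

def laser_likelihood(pos, measured_range, wall_at=10.0, sigma=0.1):
    """Likelihood of a range return from a known wall (toy model)."""
    expected = wall_at - pos
    return np.exp(-0.5 * ((measured_range - expected) / sigma) ** 2)

measured_range = 9.0                          # pretend laser return
weights *= laser_likelihood(particles, measured_range)
weights /= weights.sum()

# "Translate" the particle posterior back into Kalman language:
pos_mean = np.sum(weights * particles)
pos_var = np.sum(weights * (particles - pos_mean) ** 2)

# Kalman filter over [position, velocity], treating the particle estimate
# as a Gaussian pseudo-measurement of position (toy constant-velocity model).
dt = 0.025                                    # assumed 40 Hz update
x = np.array([0.0, 0.0])                      # [position, velocity]
P = np.eye(2)
F = np.array([[1.0, dt], [0.0, 1.0]])
x, P = F @ x, F @ P @ F.T + 0.01 * np.eye(2)  # predict step
H = np.array([[1.0, 0.0]])
R = np.array([[pos_var]])
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
x = x + K @ (np.array([pos_mean]) - H @ x)    # update step
P = (np.eye(2) - K @ H) @ P
print("fused state [pos, vel]:", x)
```

The point of the sketch is only the division of labor: the expensive particle machinery handles the variables with awkward measurement models, and its output is compressed to a mean and variance so the cheap Kalman machinery can absorb it.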
Confronting doubt
To plot the plane’s trajectory, Bry and Roy adapted extremely efficient motion-planning algorithms developed by AeroAstro professor Emilio Frazzoli’s Aerospace Robotics and Embedded Systems (ARES) Laboratory. The ARES algorithms, however, are designed to work with more reliable state information than a plane in flight can provide, so Bry and Roy had to add an extra variable to describe the probability that a state estimation was reliable, which made the geometry of the problem more complicated.
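As a rough illustration of what “adding an extra variable for estimate reliability” can mean in a sampling-based planner, here is a toy 2-D sketch in which every tree node carries an uncertainty value and edges that would let uncertainty grow past a bound are rejected. This is only a sketch of the general idea, not the ARES algorithms or the planner Bry and Roy actually built; the growth and shrink rates and the “feature-rich” regions are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

GOAL = np.array([9.0, 9.0])
MAX_UNCERTAINTY = 2.0

# Each node stores a position, an uncertainty level, and its parent.
nodes = [{"pos": np.array([0.0, 0.0]), "sigma": 0.1, "parent": None}]

def propagate_uncertainty(sigma, step, near_feature):
    # Uncertainty grows with distance traveled and shrinks where the
    # (hypothetical) laser sees good structure to localize against.
    sigma = sigma + 0.05 * step
    return sigma * 0.5 if near_feature else sigma

for _ in range(2000):
    sample = rng.uniform(0.0, 10.0, size=2)
    nearest = min(nodes, key=lambda n: np.linalg.norm(n["pos"] - sample))
    direction = sample - nearest["pos"]
    dist = np.linalg.norm(direction)
    if dist == 0.0:
        continue
    step = min(1.0, dist)
    new_pos = nearest["pos"] + direction / dist * step
    near_feature = new_pos[0] < 2.0 or new_pos[1] > 8.0   # toy "pillars"
    new_sigma = propagate_uncertainty(nearest["sigma"], step, near_feature)
    if new_sigma > MAX_UNCERTAINTY:
        continue  # prune branches the state estimator could not support
    nodes.append({"pos": new_pos, "sigma": new_sigma, "parent": nearest})
    if np.linalg.norm(new_pos - GOAL) < 0.5:
        print("reached goal with uncertainty", round(new_sigma, 2))
        break
```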
Paul Newman, a professor of information engineering at the University of Oxford and leader of Oxford’s Mobile Robotics Group, says that because autonomous plane navigation in confined spaces is such a new research area, the MIT team’s work is as valuable for the questions it raises as the answers it provides. “Looking beyond the obvious excellence in systems,” Newman says, the work “raises interesting questions which cannot be easily bypassed.”
But the answers are interesting, too, Newman says. “Navigation of lightweight, dynamic vehicles against rough prior 3-D structural maps is hard, important, timely and, I believe, will find exploitation in many, many fields,” he says. “Not many groups can pull it all together on a single platform.”
The MIT researchers’ next step will be to develop algorithms that can build a map of the plane’s environment on the fly. Roy says that the addition of visual information to the rangefinder’s measurements and the inertial data could make the problem more tractable. “There are definitely significant challenges to be solved,” Bry says. “But I think that it’s certainly possible.”
Comments
Does anybody know more about this "super fast, 40 Hz laser rangefinder"?
Is it something that's easily available to purchase?
The onboard video at the end is amazing.
The super-fast, 40 Hz laser rangefinder scans so often that the plane barely moves between scans, so it effectively appears to stand still to the flight computer. They can then match features in each laser scan to features in the already-known 3D map to deduce the plane's position.
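Roughly the idea, as a toy 2-D sketch (definitely not MIT's code; the pillar map and the brute-force translation search are just for illustration):

```python
import numpy as np

# Toy 2-D version of matching a laser scan against a known map of
# pillars to recover the vehicle's position.
map_pillars = np.array([[2.0, 3.0], [5.0, 1.0], [7.0, 4.0]])

true_position = np.array([1.0, 0.5])
scan = map_pillars - true_position          # pillars as seen from the plane

# Brute-force search for the translation that best aligns scan to map.
best, best_err = None, np.inf
for x in np.linspace(0.0, 3.0, 61):
    for y in np.linspace(0.0, 3.0, 61):
        shifted = scan + np.array([x, y])
        # Each shifted scan point should land near some map pillar.
        err = sum(np.min(np.linalg.norm(map_pillars - p, axis=1)) for p in shifted)
        if err < best_err:
            best, best_err = (x, y), err

print("estimated position:", best)          # ~ (1.0, 0.5)
```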
"laser rangefinder and inertial sensors"