Comments
It is true that we use the Vicon system to track all of the quadrotors in the video. However, this isn't about just tracking - the main focus is to generate and follow time-critical trajectories that avoid collisions with static and moving obstacles. We ran the Vicon feedback loop at 100 Hz here, though the actual tracking system could work at up to 300 Hz, I believe. Also, the point was to show that different quadrotors may have different capabilities and that the software (planning) should take into account what each platform can and can't do.
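To give a rough idea of what "taking platform capabilities into account" can look like in software, here is a simplified Python sketch of a fixed-rate feedback loop that clamps the planner's commands to per-vehicle limits. This is purely illustrative - the names, limits, and callbacks are my own assumptions, not our actual code.

import time
from dataclasses import dataclass

@dataclass
class PlatformLimits:
    # Illustrative per-quadrotor capability envelope (the value is an assumption)
    max_accel: float  # m/s^2

def clamp(value, limit):
    return max(-limit, min(limit, value))

def feedback_loop(get_tracked_state, plan_step, send_command, limits, rate_hz=100.0):
    # Fixed-rate loop: read the external tracking data, plan one step, clamp the
    # command to this platform's limits, and send it to the vehicle.
    period = 1.0 / rate_hz
    while True:
        start = time.monotonic()
        state = get_tracked_state()          # pose/velocity from the motion-capture system
        desired_accel = plan_step(state)     # collision-aware trajectory-following step
        safe_accel = [clamp(a, limits.max_accel) for a in desired_accel]
        send_command(safe_accel)
        time.sleep(max(0.0, period - (time.monotonic() - start)))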
Beautiful!!!!
I've always found demonstrations with Vicon systems fascinating. It is true that you won't find these systems in general environments. But these systems show just what can be done if the vehicle's state is precisely known.
The Vicon systems themselves are quite sophisticated - if I am correct, each "camera" uses an image sensor specifically designed to track those "dots" and also has an array of LEDs that flash precisely to illuminate them. I believe the frame rates go up to 500 Hz or so, so there are really no lag issues.
These systems are too expensive for general use, and it could take a few Moore's Law steps to bring their cost down for the "everyday guy/gal", but I wonder if a reduced-precision version could be made that, when coupled with the right inertial or other sensing on the vehicle, could provide similar capability.
What a beautiful ballet, so precise.
I don't think that detracts at all from the usefulness or value of the work they are doing. I see it as a separate issue. Other teams can find solutions for better position sensing, and others are working on better environment sensing. Optical flow, Kinect, lidar, and other forms of optical processing are all making good strides on MAVs. It is only a matter of time before we can combine the work demonstrated here with some of those other technologies.
If the quads could solve their own positions with greater accuracy and precision, then they could mesh-communicate them, and that would allow for similar results without the external systems. But personally, I think that until they have good obstacle/environment sensing, this is better as a lab toy or a high-maintenance, over-smart RC vehicle than a reliable autonomous work platform. The same could be said of any robot, including the fantastic work done here. The real world is a messy, changing place full of things that smash our beautiful machines into pieces.
There are two reasons UAV projects really captured my attention: first, aerial imagery, because no one has the data I want and they charge more than I care to pay for the frequency of new imagery I need; and second, because in most areas, once you get 100 to 200 feet off the ground, you can virtually ignore obstacles, because there are none.
This is the major change, for me, over the robots I built years ago, which followed lines, signals, and lights and expressed basic behaviors. They had very basic obstacle detection, avoidance, and mapping... but the cluttered environments they were consigned to made them less effective as a work platform. It hobbled them.
By pushing things into the air, we can ignore this limitation and get around it, which lets us get more functionality out of them. But in the end, solid environmental and obstacle awareness will become the key game changer for the "working" (flying) robot. In the meantime, teams like the one in the video are fleshing out the algorithms that will make these robots useful in messy real-world situations once that environment-sensing problem is solved.
Yes, I've seen videos from them before - I believe it was on the SparkFun page - and this is awesome. They have 20 or so cameras across the room to calculate trajectories and positions, but they must also have some very good algorithms for that. :)
The motion capture cameras that provide location information about the copters are stationary in the environment, so this technique only works in locations that are set up for it (cameras installed and calibrated). I'm not sure the technology is mature enough yet, but I'd like to see cameras mounted on the copters to provide location information relative to the environment, so the technique could be used anywhere. Something more like this, but small enough to fit on a copter.
http://drp.disneyresearch.com/projects/mocap/
Thanks for sharing the video; it's still cool as hell. Just wishful thinking, I guess.
The guys from this lab always do amazing things. There are already many of their videos on YouTube, and they continue to surprise us with new ones!
Gent, they use an external tracking system delivering high accuracy at high frequency (200 Hz or up; search for Vicon). With that data you have a good basis for precise control.
And that, for me, is the really impressive part. They must have a very good dynamic model running in the background for the trajectory planning and control! That's constant development over years...
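Even a crude point-mass model gives a feel for the kind of feasibility check such planning might involve. A toy Python example (my own simplification, not their actual model): treat the quadrotor as a point mass and verify that the thrust needed to follow a candidate trajectory, including cancelling gravity, stays under the vehicle's limit.

import math

GRAVITY = 9.81  # m/s^2

def required_thrust_accel(accel_xyz):
    # Specific thrust (m/s^2) a point-mass quadrotor must produce to achieve
    # the commanded acceleration while also cancelling gravity.
    ax, ay, az = accel_xyz
    return math.sqrt(ax**2 + ay**2 + (az + GRAVITY)**2)

def trajectory_feasible(sampled_accels, max_thrust_accel):
    # Illustrative check: every sampled acceleration along the candidate
    # trajectory must stay within the platform's thrust limit.
    return all(required_thrust_accel(a) <= max_thrust_accel for a in sampled_accels)

# A gentle segment passes, an aggressive one fails (the 15 m/s^2 limit is an assumption):
print(trajectory_feasible([(0.0, 0.0, 1.0), (2.0, 0.0, 0.0)], 15.0))  # True
print(trajectory_feasible([(10.0, 10.0, 5.0)], 15.0))                 # False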
I've seen this like a million times already and it still amazes me. How do they do it? The quadcopters are so stable.