So I think we can all agree that a reliable detect-and-avoid system would be the holy grail of the UAV world. The current debacle of UAV regulation hinges on the fact that UAVs cannot avoid collisions, because they cannot sense the presence of other traffic.
With the current technology at hand, it seems to me that vision-based systems are the only option for small UAVs, due to the cost and weight constraints of radar. There are already some research centers doing work in this area, such as these guys:
I am also compelled to start research in this area both due to personal interest and necessity. Having Project Andromeda puts me in a good position where we can develop and test the system on a reliable platform.
I am wondering whether an Open-Source version of the same system would be possible. I would set up the project and provide hosting (most likely as part of the PA website). The target demographic would be people interested in digital image processing, SLAM systems, vision-based spatial tracking and general robotics.
A great deal of work has already been done as part of the Project Andromeda effort, and I will keep the PA code separate from the work done on this project to preserve license integrity (as PA is not under an Open Source license).
Here's a short description of the proposed system:
The picture above is the main camera of the Project Andromeda platform, the Sony FCB-EX20DP.
The PA platform also includes a 1.4 GHz embedded PC with an on-board frame grabber that captures frames from the FCB-EX20DP in both compressed and raw form, for transmission to the ground and for processing. The camera is attached to a pan-tilt gimbal which allows it to be stabilized. I'm currently using OpenCV to research different methods.
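One of the methods being looked at, dense optical flow, can be illustrated without OpenCV as naive block matching. This is only a toy sketch to show the idea (function and parameter names are mine, not from the project; OpenCV's dense flow routines such as Farneback's method are far faster and more robust):

```python
import numpy as np

def block_flow(prev, curr, block=8, radius=4):
    """Toy dense optical flow: for each block in `prev`, find the
    displacement (dy, dx) in `curr` that minimises the
    sum-of-absolute-differences (SAD) matching cost."""
    h, w = prev.shape
    flow = np.zeros((h // block, w // block, 2))
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = prev[y:y + block, x:x + block].astype(float)
            best, best_d = np.inf, (0, 0)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue  # candidate window falls outside the frame
                    cand = curr[yy:yy + block, xx:xx + block].astype(float)
                    sad = np.abs(ref - cand).sum()
                    if sad < best:
                        best, best_d = sad, (dy, dx)
            flow[by, bx] = best_d
    return flow
```

Blocks whose flow disagrees with the dominant (background) motion are candidate moving objects.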
The proposed system would use a modification of the current Project Andromeda PTZ camera to scan the skies for traffic. It would use dense optical-flow fields coupled with feature detection to spot objects moving against the background. It would then use Kalman Filters to track them in 3D space, and mono-SLAM to obtain range values where possible. An alternate version could use a lightweight laser range-finder coupled with the main camera to obtain accurate ranging for objects that are close.
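The Kalman tracking step might look something like the following minimal constant-velocity filter. This is a sketch in 2D image coordinates only (the real system would track in 3D and fuse range estimates; all names and noise values here are illustrative assumptions):

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal 2D constant-velocity Kalman filter.
    State: [x, y, vx, vy]; measurements: pixel positions [x, y]."""
    def __init__(self, x0, y0, dt=1.0, q=1e-2, r=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])
        self.P = np.eye(4) * 10.0              # initial state uncertainty
        self.F = np.array([[1, 0, dt, 0],      # constant-velocity dynamics
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], float)
        self.H = np.array([[1, 0, 0, 0],       # we only observe position
                           [0, 1, 0, 0]], float)
        self.Q = np.eye(4) * q                 # process noise
        self.R = np.eye(2) * r                 # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, zx, zy):
        z = np.array([zx, zy])
        y = z - self.H @ self.x                     # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)    # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

The predict step also gives a search window for re-acquiring an object that is briefly lost between frames.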
The end result is a system that maintains a repository of objects by tracking them in the surrounding 3D space. The zoom ability allows the system to quickly obtain close-up images of each object. This could either be used to notify a ground-based controller or to match up against an onboard database of objects.
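The repository of tracked objects could start out as simple as a dictionary of tracks with greedy nearest-neighbour data association. A hypothetical sketch (class, method and parameter names are all mine, purely for illustration):

```python
import math

class TrackRepository:
    """Toy object repository: associate each new detection with the
    nearest existing track, or spawn a new track if none is close."""
    def __init__(self, gate=10.0):
        self.tracks = {}       # track_id -> last known (x, y)
        self.next_id = 0
        self.gate = gate       # max association distance in pixels

    def observe(self, x, y):
        best_id, best_d = None, self.gate
        for tid, (tx, ty) in self.tracks.items():
            d = math.hypot(x - tx, y - ty)
            if d < best_d:
                best_id, best_d = tid, d
        if best_id is None:            # no track close enough: new object
            best_id = self.next_id
            self.next_id += 1
        self.tracks[best_id] = (x, y)
        return best_id
```

Each track ID would then carry its own Kalman state, close-up imagery from the zoom camera, and any database match.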
So, this is obviously something I've been thinking about for a while. It could go two ways: I could either develop it closed-source as part of PA, or it could be an open source project that relies on PA and other similar projects for air-time and reliability testing. The end result would be an Open Source system comprising the mechanical, electronic and software blueprints necessary for operation.
Is anyone interested in something like this? If you are, or if you have any other thoughts, please shout out below. I'll be following this carefully to see if it's at all feasible as an Open Source project.
Comments
Here is the FAA final rule on ADS-B.
Tom
Keep in mind that any "sense and avoid" system would need to be certified if you want it to fly in the NAS. I doubt there will ever be an open source product certified.
My $0.02.
If you are interested in and capable of image processing, vision-based systems and anything related, please shout out. I'll see how many people are interested and then we can discuss the details on the forums.
The idea of an open source project to implement a 'see and avoid' capability for amateur UAVs should get a lot of support. We have limited equipment, budget, and capabilities, but we do have the need for such a system. We are willing to contribute ideas, critical review and, possibly, system or component testing.
Let's hope for a successful project.
Roborealm also supports image tracking.
Of course the OTHER plane must have a transponder also.
Earl
Sub-pixel disparities are done:
http://www.google.com/url?sa=t&source=web&cd=1&ved=0CBc...
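For anyone curious how sub-pixel disparity is typically obtained: the usual trick is to take the integer disparity with the best matching cost and fit a parabola through it and its two neighbours. A minimal sketch (function name is mine, not from the linked paper):

```python
def subpixel_peak(costs, d):
    """Refine integer disparity `d` (index of the minimum matching cost)
    by fitting a parabola through costs[d-1], costs[d], costs[d+1]
    and returning the parabola's minimum as a float disparity."""
    c0, c1, c2 = costs[d - 1], costs[d], costs[d + 1]
    denom = c0 - 2.0 * c1 + c2
    if denom == 0.0:          # flat cost curve: no refinement possible
        return float(d)
    return d + 0.5 * (c0 - c2) / denom
```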
My plan is to develop a scalable system, so a hemisphere of sensors can be used in conjunction for both long range and short range. Look at Geof's setup (although for optical flow):
http://centeye.com/graphics/slides/MAOS2.JPG
I am using Firefly Small sensors:
http://centeye.com/pages/products/products.html
I have looked at both SURF and SIFT and intend to use them for the feature detection component. However, this is just an idea at this stage. The premise is to promote collaboration, set up a website and repository, and generate awareness of the project through Project Andromeda and other media.
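Whichever detector ends up being used, the matching stage for SIFT/SURF-style descriptors usually applies Lowe's ratio test: accept a match only if the nearest descriptor is clearly closer than the second nearest. A NumPy sketch of that test (names and the 0.75 threshold are illustrative, not project code):

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.75):
    """Match each descriptor row in desc_a against desc_b, keeping only
    matches whose nearest neighbour beats the second nearest by the
    given ratio (Lowe's ratio test). Returns (index_a, index_b) pairs."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # distance to every candidate
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:     # unambiguous match only
            matches.append((i, int(best)))
    return matches
```

Ambiguous features (e.g. repeated texture, featureless sky) are discarded rather than mismatched, which matters when the tracker feeds a collision-avoidance decision.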