I didn't expect such a fast response to uploading a picture, otherwise I would have included a little more information. Since I didn't know whether anyone would read a comment on my own picture, I figured I should make it a blog post.

The picture was meant as a reference to a prior project. I am currently looking for videos or fast image sequences taken onboard a UAV. The goal is to evaluate the feasibility of some computer vision algorithms for use onboard a UAV. I wrote a post in the forum about it and would be super glad for any data provided by this community. In return I'll keep you posted about any progress. (In case there is none and it would be too embarrassing, I might never mention it again, though.)

The board is a stereo vision system for mobile robots that I designed two years ago. Thanks to the four-layer board, rather good lenses, and custom-made lens mounts, it's worth around 500 euros, paid for by the Hamburg University of Technology. (That was before I spent my year abroad in Berkeley. Unfortunately I'm back home again.) The camera spacing was chosen rather arbitrarily: the board had to fit on a Eurocard (100 x 160 mm), and it was designed for rather short ranges, as the mobile robot was intended for indoor use.

Some specs:
- It uses two identical VGA Kodak CMOS image sensors.
- A Spartan-3 FPGA with 32 MB of SDRAM and 4 MB of SPI flash drives the image sensors and is intended for some low-level image processing (like undistortion).
- A Blackfin BF532 (you were right) running at 400 MHz with 32 MB of SDRAM and 4 MB of SPI flash then does the stereo processing of the image data.
- Computed data is transmitted over Ethernet using a Microchip SPI LAN controller.
- The board is four-layer as mentioned; the lens mounts are custom-made CS mounts, and the lenses have a fixed 4 mm focal length.
- Power consumption is moderate. The FPGA and the LAN controller burn comparatively much power, but it's still in a range you could supply from batteries.

Here are some results. I know they are not brilliant, but it's just a first attempt with a rather simple algorithm. (I added them anyway, as the post wouldn't be complete without them.) Brighter gray corresponds to closer objects.

A note about this kind of stereo vision for use on UAVs: I might be wrong, but to me it seems like a not-so-feasible approach because of the relatively large distance to objects on the ground. The baseline needed to get reasonable spatial resolution, even with high-resolution sensors, would probably be too large (even if the two cameras were mounted on the tips of the wings). I'm trying to dive deeper into structure-from-motion techniques with a single camera, hoping to get some results while keeping the overall system complexity at about the level of an ambitious hobbyist project.

Bye, Jørn
Comments

Thank you for recommending those videos. The last one in particular is of rather good quality.
Unfortunately the videos I'm looking for should be a little different. FPV videos are not well suited, as I'm looking for footage from a fixed camera so that I can derive information about the orientation of the aircraft from the video. Also, the algorithm can't deal well with translations along the viewing direction or with huge differences in distance like those occurring when facing the horizon.
I guess the videos I'm looking for - image sequences of the ground taken with a long focal length lens on a fixed, downward-facing camera - are probably so boring that people either don't record them or don't upload them for the world to see.
The lens mounts were custom-made for this board. Essentially they are rings of solid aluminum with a thread for the lenses and four holes for 2.5 mm screws. They are screwed to the board, which gives reasonable accuracy in aligning the sensor and lens centers.
Thanks for the tip about rcgroups.com. I already searched YouTube but so far haven't found any good video there.
The MonoSLAM approaches I took a look at didn't seem feasible to me due to the somewhat special projection conditions of aerial videos. But maybe I'm wrong about that.
P.S. For videos and images you can probably search rcgroups.com or other sites for FPV videos (or just google "FPV video").
For single-camera solutions, look into the MonoSLAM area of research (if you haven't already done so).
Yes, it is our camera. For the anaglyph, the images were not rectified. We are actively in the process of integrating a sparse-map algorithm with rectification. The algorithms can be found here - http://code.google.com/p/sentience/ - though I can probably give you a more direct link in 1-2 weeks.
@Chris Yeah, I agree, the indoor blimp seems like the perfect airborne application for stereo vision. Not only the relatively small distances but also the rather low velocity make it a feasible approach for collision avoidance. The current version is definitely too heavy for that, but with plastic lenses and lens mounts and a much smaller PCB it could probably be used.
@bGatti The single camera is the approach I'm trying to get started with. By the way, I'm still looking for videos and image sequences, so if anyone doesn't mind sharing...
The two-UAV idea doesn't seem too promising, I think (though there are probably a lot of similarities to the structure-from-motion approach with only one camera). Calibrating even a rigid stereo rig from an arbitrary scene isn't an easy task, so doing it in real time with two moving planes while also communicating video streams probably isn't either.
@Jack OK, I agree, the pictures aren't too great. But as I said, they were produced with the simplest algorithm, without any filtering afterwards. The dark leg appearing farther away than the shoe isn't actually a real problem; it is only due to the settings of my algorithm. It scans through a given disparity range, trying to match windows between the two images. In this case I just provided a range smaller than the disparities actually present in the image, so it was impossible for it to match the leg. This isn't the first stereo camera I've built, and from version to version I've realized that lens and sensor quality isn't that important. There is a great calibration toolbox (I think the Camera Calibration Toolbox from Caltech) that can deal with some amount of distortion.
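In Python-like pseudocode that scan looks roughly like the sketch below. The names and parameters are made up for illustration (the real implementation runs on the Blackfin), but it shows why a too-small disparity range simply cannot match the leg.

```python
# Illustrative sketch of the window matching described above: for every
# pixel, scan a fixed disparity range and keep the shift whose window gives
# the lowest sum of absolute differences (SAD). Slow reference version.
import numpy as np

def block_match(left, right, max_disp=32, win=5):
    """left, right: rectified grayscale images as 2-D float arrays."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            best_d, best_cost = 0, np.inf
            for d in range(max_disp):  # the provided disparity range
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                cost = np.abs(ref - cand).sum()  # SAD over the window
                if cost < best_cost:
                    best_cost, best_d = cost, d
            # If the true disparity lies outside max_disp, no candidate in
            # the range matches and the pixel gets a wrong, too-distant
            # value - exactly the leg artifact.
            disp[y, x] = best_d  # brighter = closer, as in the post
    return disp
```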
@Howard I started designing the hardware three years ago, so by the time I found your (it is actually yours, isn't it?) stereo camera I had already assembled the board. I was quite excited about the project, as it uses the Blackfin as well, and from time to time I visit the page to see whether there are any computed disparity images to look at.
I was wondering about the composite anaglyph video. It doesn't actually look rectified to meet the epipolar constraints - or am I wrong about that?
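For context: rectification warps both images so that corresponding points end up on the same image row, which is what makes a purely horizontal disparity search valid. A minimal sketch of the usual procedure with OpenCV, assuming calibration results K1, d1, K2, d2, R, T are already available - an illustration only, not Surveyor's code:

```python
# Stereo rectification sketch: warp both images so the epipolar lines
# become horizontal and aligned across the pair.
import cv2

def rectify_pair(left, right, K1, d1, K2, d2, R, T):
    size = left.shape[1], left.shape[0]  # (width, height)
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)
    m1x, m1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
    m2x, m2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2, size, cv2.CV_32FC1)
    # After remapping, corresponding points lie on the same image row.
    return (cv2.remap(left, m1x, m1y, cv2.INTER_LINEAR),
            cv2.remap(right, m2x, m2y, cv2.INTER_LINEAR))
```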
Joern's FPGA should give better results for stereo disparity than our dual processors, and the combination of an FPGA with a programmable processor is nice. In any case, I agree that aerial applications for stereo are pretty limited except at relatively low altitude. Structure from motion with a single camera would seem to be a more practical approach for a UAV.
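To put rough numbers on that: from Z = f*B/d, a one-pixel disparity error at depth Z costs about Z^2/(f*B) of depth error. The figures below are made up but plausible for a small fixed-wing UAV, not measurements from either camera discussed here:

```python
# Back-of-the-envelope stereo depth resolution (illustrative values only).
f_px = 2000.0  # assumed focal length in pixels (high-res sensor, long lens)
B = 2.0        # assumed baseline in metres (cameras near the wing tips)
Z = 100.0      # assumed distance to the ground in metres

disparity = f_px * B / Z         # -> 40 px of disparity at 100 m
depth_err = Z ** 2 / (f_px * B)  # -> 2.5 m of depth error per pixel of
print(disparity, depth_err)      #    disparity error
```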
A shoe that appears closer than the leg could be a problem. Building a custom HD 3D camera is something everyone wants to do, but machining a lens mount on after-tax personal income is virtually impossible. EF lenses, not C-mount lenses.
For spatial resolution from the air at great distances, one might use only a single camera and take separate shots at separate times (a rough sketch of this two-shot idea follows below). Google is building 3D maps of cities; a proper UAV could accomplish this feat rather cheaply.
For real-time 3D, one could use two UAVs flying near each other - probably a holy grail that the killer class of every country would want to use.
Finally, for the critical rubber-meets-the-road challenge - i.e. landing - this arrangement might have enough resolution, though I can think of faster ways of determining the height of the runway.
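A rough sketch of the single-camera, two-shot idea above, using OpenCV: from matched points in two views and known intrinsics, recover the relative pose and triangulate. Function and variable names are illustrative, and the result is only defined up to an unknown global scale - this is not a complete structure-from-motion pipeline:

```python
# Two-view structure-from-motion sketch (illustrative, not a full pipeline).
import cv2
import numpy as np

def two_view_points(pts1, pts2, K):
    """pts1, pts2: N x 2 float arrays of matched pixels; K: 3x3 intrinsics."""
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)  # relative camera motion
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # first view at origin
    P2 = K @ np.hstack([R, t])                         # second view pose
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (pts4d[:3] / pts4d[3]).T  # 3-D points, up to an unknown scale
```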
https://www.youtube.com/watch?v=aBChrryIUsw
More from this user (3 FPV videos):
https://www.youtube.com/user/acotv
A quick search on Vimeo shows a bunch of videos:
http://www.vimeo.com/videos/search:fpv
Also Google Video:
http://video.google.com/videoplay?docid=-9019522859468985819&hl=en
Hope this helps.
How did you affix the camera / lens mount to the PCB?
Yes - the indoor blimp is a good application for this because of the distances. We used a stereo camera to produce this composite anaglyph video from YARB -
3D view from YARB indoor blimp, from Surveyor Corporation on Vimeo.