I didn't expect such a fast response to uploading a picture, otherwise I would have included a little more information. Since I didn't know whether anyone would read a comment on my own picture, I figured I should make it a blog post.
The picture was meant as a reference to a prior project. I am currently looking for videos or fast image sequences taken onboard a UAV, in order to evaluate the feasibility of some computer vision algorithms for use onboard a UAV. I wrote a post in the forum about it and would be super glad for any data this community can provide. In return I'll keep you posted about any progress. (In case there is none and it would be too embarrassing, I might never mention it again, though.)
The board is a stereo vision system for mobile robots that I designed two years ago. Thanks to the four-layer board, rather good lenses, and custom-made lens mounts, it's worth around 500 euros, paid for by the Hamburg University of Technology. (That was before I spent my year abroad in Berkeley; unfortunately I'm back home again.) The camera spacing was chosen rather arbitrarily: the board had to fit a euro card (100 x 160 mm), and the distances were kept short because the mobile robot was intended for indoor use.
- It uses two identical VGA Kodak CMOS image sensors
- A Spartan-3 FPGA with 32 MB of SDRAM and 4 MB of SPI flash drives the image sensors and is intended for some low-level image processing (undistortion etc.)
- A Blackfin BF532 (you were right) running at 400 MHz with 32 MB SDRAM and 4 MB SPI flash then does the stereo processing of the image data
- Computed data is transmitted over Ethernet using a Microchip SPI LAN controller
- The board is four-layer as mentioned; the lens mounts are custom-made CS mounts, and the lenses have a fixed 4 mm focal length
- Power consumption is moderate. The FPGA and the LAN controller draw a fair amount of power, but it's still in a range you could supply from batteries
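For readers curious what the "stereo processing" step boils down to, here is a minimal sum-of-absolute-differences (SAD) block matcher in Python. This is only an illustrative sketch of the basic idea, not the algorithm actually running on the Blackfin:

```python
import numpy as np

def sad_disparity(left, right, max_disp=8, block=5):
    """Naive SAD block matching on a rectified grayscale stereo pair.

    For each pixel in the left image, slide a block along the same row
    of the right image and keep the shift (disparity) with the lowest
    sum of absolute differences. Larger disparity = closer object.
    """
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.uint8)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(int)
            best, best_d = None, 0
            for d in range(max_disp):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(int)
                sad = np.abs(patch - cand).sum()
                if best is None or sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp

# Quick synthetic check: shift a random texture by a known 3 px disparity.
rng = np.random.default_rng(0)
left = rng.integers(0, 255, (16, 40)).astype(np.uint8)
right = np.roll(left, -3, axis=1)   # right view: scene shifted by 3 px
disp = sad_disparity(left, right)
print(disp[8, 12:28])   # interior pixels should all read 3
```

Real implementations add subpixel interpolation, a left-right consistency check, and a texture threshold, but the triple loop above is the core of it.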
Here are some results. I know they are not brilliant, but it's just a first attempt with a rather simple algorithm. (I added them anyway, as the post wouldn't be complete without them.) Brighter gray corresponds to closer objects.
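The "brighter = closer" mapping comes straight from triangulation: for a rectified pair, depth is Z = f·B/d, so a larger disparity d (brighter pixel) means a smaller depth. The numbers below are illustrative assumptions only (the pixel pitch and baseline are guesses, not the board's calibration):

```python
# Triangulation for a rectified stereo pair: Z = f * B / d.
# Assumed numbers, not measured: VGA sensor with a 6 um pixel pitch
# behind the 4 mm lens, and a baseline of 0.12 m (roughly what fits
# on a 100 x 160 mm euro board).
f_px = 0.004 / 6e-6        # focal length in pixels, ~667
B = 0.12                   # baseline in metres (assumption)
for d in (4, 16, 64):      # disparity in pixels
    Z = f_px * B / d
    print(f"d = {d:2d} px  ->  Z = {Z:5.2f} m")
```

Note how the depth steps get coarse quickly at small disparities, which is exactly where distant objects live.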
A note about this kind of stereo vision for use on UAVs: I might be wrong, but it doesn't seem like a very feasible approach to me, due to the relatively large distance to objects on the ground. The baseline needed to get reasonable spatial resolution, even with high-resolution sensors, would probably be too large (even if the two cameras were mounted on the wing tips). I'm instead trying to dive deeper into structure-from-motion techniques with a single camera, hoping to get some results while keeping the overall system complexity at about the level of an ambitious hobbyist project.
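To put a rough number on that baseline argument: the first-order depth resolution of a stereo rig is dZ ≈ Z²·dd / (f·B), so the baseline needed for a target resolution is B ≈ Z²·dd / (f·dZ). The figures below are assumptions for illustration (altitude, focal length in pixels, matching precision), not measurements:

```python
# Baseline required for a given depth resolution, from dZ ~ Z^2 * dd / (f * B).
# Assumed numbers: flying at Z = 100 m, a high-resolution sensor with an
# effective focal length of f = 2000 px, matching good to dd = 0.5 px.
def baseline_needed(Z, dZ, f_px, dd=0.5):
    """Baseline (m) for depth resolution dZ (m) at range Z (m)."""
    return Z**2 * dd / (f_px * dZ)

for dZ in (1.0, 5.0):
    B = baseline_needed(100.0, dZ, 2000.0)
    print(f"for {dZ} m depth resolution at 100 m altitude: B ~ {B:.2f} m")
```

Even under these fairly generous assumptions, a 1 m depth resolution already asks for a baseline of a couple of metres, which is more than most small airframes can offer.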