Here's an updated version of the software-based horizon finder for the SRV-1 Blackfin. An actual slope and intercept are now computed, and some filtering has been added as well. Here was the original post ...
Noting the interesting discussion about optical flow and horizon finders in this thread, I undertook to add a simple horizon finder to the SRV-1 Blackfin Camera firmware. The algorithm uses a basic edge detection function that is already built into the SRV-1, dividing the image into 16 columns and searching from top to bottom for the first edge hits. From the video, it appears that the edge threshold could be set a bit lower, but the results are pretty good without any tuning or filtering. The Google Code project is here - http://code.google.com/p/surveyor-srv1-firmware/ . The next step is to add a least-squares fit to draw a line through the edge segments and then compute pitch and roll angles.
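For readers who want to experiment, here is a minimal C sketch of the approach described above: sample a fixed number of columns, scan each one top-to-bottom for the first edge hit, then least-squares fit a line through the hits. This is not the actual SRV-1 firmware code; `edge_strength()` is a hypothetical stand-in for the edge function built into the firmware, and the constants are illustrative.

```c
/*
 * Minimal sketch of the column-scan horizon finder described above.
 * NOT the actual SRV-1 firmware code: edge_strength() is a hypothetical
 * stand-in for the firmware's built-in edge function, and all constants
 * are illustrative.
 */
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979
#endif

#define COLS        16   /* number of sampled columns          */
#define IMG_W       320
#define IMG_H       240
#define EDGE_THRESH 30   /* lower = more sensitive, more noise */

extern int edge_strength(int x, int y);  /* stand-in for the firmware edge filter */

/* Scan each column top-to-bottom for the first edge hit, then do a
 * least-squares fit y = m*x + b through the hit points. */
void find_horizon(float *m, float *b)
{
    float sx = 0, sy = 0, sxx = 0, sxy = 0;
    int n = 0;

    for (int c = 0; c < COLS; c++) {
        int x = c * (IMG_W / COLS) + IMG_W / (2 * COLS); /* column center */
        for (int y = 0; y < IMG_H; y++) {
            if (edge_strength(x, y) > EDGE_THRESH) {
                sx += x; sy += y;
                sxx += (float)x * x; sxy += (float)x * y;
                n++;
                break;                   /* first hit only */
            }
        }
    }

    if (n < 2) {                         /* not enough hits for a fit */
        *m = 0.0f; *b = IMG_H / 2.0f;
        return;
    }
    *m = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    *b = (sy - *m * sx) / n;
}

/* Roll follows directly from the slope; pitch would come from the
 * intercept's offset from image center, scaled by the vertical FOV. */
float roll_deg(float m) { return atanf(m) * 180.0f / (float)M_PI; }
```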

Comments

  • Very nice, Howard. I must give this a go at some point!
  • In a sense, optical flow and stereo flow are functionally equivalent - you are looking for the correlation between two different camera views. The difference is that with stereo vision you know the baseline between camera positions exactly, while the position and attitude differences between consecutive frames are less certain with the optical flow model. In either case, ray trace correlation with a real-world topographical model would be quite interesting.
  • Probably the easiest solution is to use stereo vision & feed it a priori information about all known height changes from Google Earth. Stereo vision can tell if it's a nearby object or the horizon (see the range-test sketch after these comments). A preprogrammed height map can tell if it's a mountain based on field of view & position.
  • WOW, that is nice!!! I wonder if you could put that on an RC jet to act as a co-pilot. It could slightly resist turns and resist dives using the tail flaps.
  • RoboRealm is a great program - it supports many image processing algorithms suitable for these types of operations, plus it includes direct support for the SRV-1 Blackfin. We often see users of the Blackfin camera first prototyping different approaches on a PC host running RoboRealm to figure out what works best, and then recoding the winning approach in Blackfin firmware.
  • The RoboRealm Skyline module deals with the "false" horizons by building a model that resists spurious changes. They suggest 120 frames for the model to smooth out the noisy bogus horizons (see the smoothing sketch after these comments). I don't know how they did it or how you should do it, but what you've done so far is inspirational.
  • "Based on your settings, XRay Vision will recognize and highlight images where motion occurs -- even automatically send you an e-mail of that frame!" Just to develop the idea further, may be you can have your ground station (on laptop, PC, etc) compare a stored overhead still image from Google Earth (select angle, etc) against a live video feed from the air (streaming from the UAV, plane, etc.) and if its finds a match, it snaps a photo or makes course correction, etc. The recognition of the object or landscape can be pre-set any any angle, vertially down or look ahead, or scan the horizon. I assume the navigation has to be fairly accurate, and the image recognition should be set with some tolerance that would allow some room for error due to lat/lon coordinates, exact angle between UAV and object/landscape.

    However, for example, the software may be able to detect the runway ahead in the image displayed at the PC/ground station and try to keep the UAV/plane on the centerline for landing.
  • I wonder if there is any free software out there that could similarly "evaluate" a displayed video feed and detect the horizon and obstacles - such as the several motion-detecting programs (like "XRay Vision" from X10-based security) that can scan and "make a decision" (by sending an e-mail) when they detect movement in the displayed picture.
  • The algorithm is based on what I showed in the first video: dividing each frame into columns and looking from the top down for transitions within each column against a common threshold (the actual edge filter is Sobel; see the Sobel sketch after these comments). So basically any obstacle can create a false trigger (birds, clouds, vapor trails, etc.), though changes in thresholds and other filters can help overcome some of the issues. However, to be practical, I think the algorithm needs to be a lot smarter about understanding what it sees, hence my earlier comment about computing depth of field. An algorithm that could divide the frame into depth layers and then compute slope and intercept for each layer would get us a lot closer to something usable.
  • Good work.
    - What happens if you have clouds in the picture - say clouds are in the middle of the sky (horizontally) and halfway between the top of the picture and the real horizon (vertically)? How will your algorithm handle that?
    - Are you finding edges using rising or falling edges, or the best edge?
    - Are there any effects on edge detection (in your algorithm) from light conditions (just after sunrise or before sunset vs. the middle of the day)?
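On the stereo suggestion above: with a known baseline and focal length, disparity converts directly to range, so a feature can be classified as a nearby obstacle or as horizon/background with a simple cutoff. A hypothetical sketch - none of these numbers come from a real rig:

```c
/* Hypothetical stereo range test. With a known baseline and focal
 * length, disparity converts to range, and anything beyond the cutoff
 * is treated as horizon/background rather than a nearby obstacle. */
#include <stdbool.h>

#define RANGE_CUTOFF_M 100.0f   /* beyond this, treat as "horizon" */

float stereo_range_m(float disparity_px, float baseline_m, float focal_px)
{
    if (disparity_px <= 0.0f)
        return 1.0e9f;          /* zero disparity: effectively at infinity */
    return baseline_m * focal_px / disparity_px;
}

bool is_horizon_feature(float disparity_px, float baseline_m, float focal_px)
{
    return stereo_range_m(disparity_px, baseline_m, focal_px) > RANGE_CUTOFF_M;
}
```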
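On the RoboRealm-style smoothing mentioned above: the exact method isn't documented here, but one plausible stand-in is an exponential moving average over the fitted slope and intercept with a time constant of roughly 120 frames, plus simple outlier rejection so a single bogus horizon (a cloud edge, say) can't yank the model around:

```c
/* Plausible stand-in for ~120-frame horizon-model smoothing (not
 * RoboRealm's actual method): an exponential moving average over the
 * fitted slope/intercept, with simple outlier rejection. */
#include <math.h>

#define ALPHA       (1.0f / 120.0f)  /* ~120-frame time constant    */
#define MAX_JUMP_PX 40.0f            /* reject wild intercept jumps */

static float model_m = 0.0f;
static float model_b = 120.0f;       /* start at mid-frame for 240 rows */

void update_horizon_model(float m, float b)
{
    if (fabsf(b - model_b) > MAX_JUMP_PX)
        return;                      /* spurious horizon: skip this frame */
    model_m += ALPHA * (m - model_m);
    model_b += ALPHA * (b - model_b);
}
```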
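And for the Sobel edge test underlying the column scan: the firmware's actual edge function may differ in detail, but the standard 3x3 Sobel magnitude check looks roughly like this, assuming `img` is an 8-bit grayscale buffer in row-major order:

```c
/* Standard 3x3 Sobel magnitude test; the firmware's actual edge
 * function may differ in detail. img is assumed to be an 8-bit
 * grayscale buffer, row-major, IMG_W x IMG_H. */
#include <stdlib.h>

#define IMG_W 320
#define IMG_H 240

int edge_strength(const unsigned char *img, int x, int y)
{
    if (x < 1 || y < 1 || x >= IMG_W - 1 || y >= IMG_H - 1)
        return 0;                    /* border pixels lack a 3x3 neighborhood */

#define P(dx, dy) (int)img[(y + (dy)) * IMG_W + (x + (dx))]

    /* horizontal and vertical Sobel gradients */
    int gx = -P(-1,-1) - 2*P(-1,0) - P(-1,1)
             +P( 1,-1) + 2*P( 1,0) + P( 1,1);
    int gy = -P(-1,-1) - 2*P(0,-1) - P( 1,-1)
             +P(-1, 1) + 2*P(0, 1) + P( 1, 1);

#undef P

    return abs(gx) + abs(gy);        /* |gx|+|gy| approximates the magnitude */
}
```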