I’m looking for a straightforward solution to have a quad land on a small outdoor target area of 1ft x 1ft. Fully autonomous. Acceptable accuracy would be +/- 2 inches of the target center.
We know the GPS location of the landing site, which the quad will use as a waypoint to reach the area and start hovering at altitude. For the actual precision descent and touchdown, a visual marker could be used; ideally I wouldn't need anything electronic on the ground, to keep things simple (unless it's really cheap, say under $50, and leads to a much simpler solution). Also, all computing must happen onboard the quad.
I was thinking of combining a Pixhawk, a Raspberry Pi 3, and its V2 camera module (8MP) to do computer vision with OpenCV. I'd like to keep things simple and, if possible, limit the image recognition to basically a color mask + find contours. First the Pixhawk would fly the quad to the GPS location. Then, in "locate & descend" mode, the RPi3 would start scanning and feed the Pixhawk (x, y, z) velocity vectors to close in on the target and land.
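To make the "locate & descend" step concrete, here's a minimal sketch of the part that turns a detected target centroid into a velocity command. The function name, gains, and descent logic are my own placeholders, and the field-of-view numbers are the nominal specs for the Pi Camera V2; the color-mask/contour stage with OpenCV would supply the pixel centroid (cx, cy):

```python
import math

def target_to_velocity(cx, cy, width, height, altitude_m,
                       hfov_deg=62.2, vfov_deg=48.8,
                       gain=0.5, descend_rate=0.3):
    """Convert the target's pixel centroid into an (vx, vy, vz)
    velocity command, assuming the camera points straight down.
    Gains and thresholds are placeholders to be tuned in flight."""
    # Angular offset of the target from the image center.
    ang_x = (cx - width / 2) / width * math.radians(hfov_deg)
    ang_y = (cy - height / 2) / height * math.radians(vfov_deg)
    # Project onto the ground plane using the current altitude.
    off_x = altitude_m * math.tan(ang_x)
    off_y = altitude_m * math.tan(ang_y)
    # Simple proportional controller toward the target; only
    # descend once we're within ~5 cm of being centered.
    vx, vy = gain * off_x, gain * off_y
    vz = descend_rate if math.hypot(off_x, off_y) < 0.05 else 0.0
    return vx, vy, vz
```

The idea would be to run this per frame and push the result to the Pixhawk as a velocity setpoint; as altitude drops, the same pixel error maps to a smaller ground offset, so the correction naturally gets gentler near touchdown.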
Will this be good enough? Any potential roadblocks that I should anticipate?
While searching on the forum I found a UC Berkeley project that seems related, although it's 2 years old. I also came across the PX4Flow work, but I'm hoping I can do without it.