Hello guys, I need some help and advice. I have a mission in a competition. The mission states that my UAV has to scan for and detect an 'X' target on the ground, then loiter before releasing a sand bomb onto the target. As I see it, this mission needs image processing from a camera, with the results sent back to the APM to command the copter to fly and loiter, and then to command a servo to release the bomb. The image processing can be done with OpenCV on the ground station, but I wonder how the camera can transfer real-time image data to OpenCV, and how OpenCV can pass its output to the APM as an input to the servo?
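For the detection part, my rough idea is template matching in OpenCV; this is just a sketch, where "x_template.png" is a placeholder reference picture of the marker and the 0.7 threshold is a number I would have to tune:

```python
import cv2

# Rough idea for the detection step: template matching against a
# reference picture of the 'X' marker. "x_template.png" is a
# placeholder file name and 0.7 is a threshold to tune in the field.
frame = cv2.imread("frame.jpg")            # one downlinked frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
template = cv2.imread("x_template.png", cv2.IMREAD_GRAYSCALE)

result = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

if max_val > 0.7:                          # confident match
    h, w = template.shape
    cx, cy = max_loc[0] + w // 2, max_loc[1] + h // 2
    print("target centre in pixels:", cx, cy)
```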
My current plan is to use the video telemetry link to transfer the real-time image data to OpenCV. Is that the correct way?
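If the downlink is an analog video link going into a USB capture dongle on the ground station, I think OpenCV would see it as an ordinary camera device, something like this (device index 0 is a guess on my part):

```python
import cv2

# Assumption: the analog video downlink feeds a USB capture dongle on
# the ground station, so OpenCV sees it as a normal camera device.
# Device index 0 is a guess; a digital link would instead expose a
# network URL such as "udp://0.0.0.0:5600".
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # ...run the target detection from the sketch above on `frame`...
    cv2.imshow("downlink", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```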
But how does the OpenCV output get transferred to the APM?
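My best guess so far is MAVLink over the same telemetry radio, e.g. with pymavlink; the port, servo channel and PWM value below are only placeholders, and I am not sure this is the right approach:

```python
from pymavlink import mavutil

# Guess: once the target is confirmed, trigger the drop servo on the
# APM over the telemetry radio with a MAVLink DO_SET_SERVO command.
# The port, baud rate, servo output (9) and PWM value (1900) are all
# placeholders for whatever the airframe actually uses.
master = mavutil.mavlink_connection("/dev/ttyUSB0", baud=57600)
master.wait_heartbeat()                    # wait until the APM is talking

master.mav.command_long_send(
    master.target_system,
    master.target_component,
    mavutil.mavlink.MAV_CMD_DO_SET_SERVO,
    0,        # confirmation
    9,        # param1: servo output number
    1900,     # param2: PWM in microseconds (release position)
    0, 0, 0, 0, 0)
```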
Can anyone give some ideas?
Replies
I have been looking at a similar use case, where a copter could LAND on a visual target, based on control inputs generated by computer vision running on an onboard camera. That would be an amazing transitional-technology capability, useful for deliveries, before lidar and other advanced sensors come out of the labs.
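As a very rough sketch of what those control inputs could look like (assuming pymavlink, GUIDED mode and a down-facing camera; every number in it is made up), the target's pixel offset from image centre could be turned into small body-frame velocity commands while descending:

```python
from pymavlink import mavutil

# Sketch only: convert the target's pixel offset from image centre
# into body-frame velocity nudges while descending. GAIN and the
# 0.3 m/s descent rate are made-up tuning values, and the signs
# depend on how the camera is mounted on the airframe.
GAIN = 0.002  # m/s per pixel of offset (assumption)

def nudge_toward_target(master, dx_px, dy_px):
    master.mav.set_position_target_local_ned_send(
        0,                              # time_boot_ms (ignored)
        master.target_system,
        master.target_component,
        mavutil.mavlink.MAV_FRAME_BODY_OFFSET_NED,
        0b0000111111000111,             # type_mask: velocity only
        0, 0, 0,                        # position (unused)
        -GAIN * dy_px,                  # vx: forward if target is up in frame
        GAIN * dx_px,                   # vy: right if target is right in frame
        0.3,                            # vz: slow descent (NED, +down)
        0, 0, 0,                        # acceleration (unused)
        0, 0)                           # yaw, yaw_rate (unused)
```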
There is this sensor board, which might be usable to do the processing on board:
https://store.3drobotics.com/products/px4flow
The NKE Hunter UAV is not a copter, but it does vision-based target drops (supplies, not sand bombs in this case - but from the UAV's perspective the difference is minor :-). The Hunter is based on open-source stacks, including OpenCV.
http://www.novemberkiloecho.com/robots/hunter-synthetic-vision-robo...
Regards,
Knut