Imaging and mapping recommendations for fixed-wing UAV

I am part of an engineering team that uses a 2-metre UAV for aerial surveying. Our current imaging platform is a Canon G9 point-and-shoot camera connected to a Raspberry Pi 1. Under Linux, gphoto2 sends trigger commands to the camera over USB. The Pi is also connected to the autopilot (formerly an ArduPilot Mega board, now a Pixhawk) to retrieve telemetry for tagging the images. Images are sent to the ground over a WiFi link separate from the autopilot link.

We have several problems with this setup:

1. gphoto2 triggers the camera too slowly. In continuous shooting mode we get an image every 4 seconds at best (0.25 fps). Disconnected from USB and using its own burst mode, the camera can sustain about 0.8 fps for long periods. Flying at 20 m/s at an altitude of 100 m, we need less than 3 seconds between images to get a suitable 30% overlap with this lens. I put CHDK on the camera but had trouble controlling it over USB; if CHDK is the recommended way to trigger, I will try again. Are there other triggering methods you would recommend? We could change cameras, but the lighter the better: a DSLR takes us to the very edge of our flight envelope, and industrial cameras are a good option but very expensive. (The trigger loop we currently time is sketched below the list.)

2. Our control software is outdated and does not synchronize telemetry and photo capture accurately. I am writing a new program to take advantage of our new Raspberry Pi 2 and provide proper synchronization. Any recommendations for this? We have looked at using the hot shoe on the G9 to know the exact moment of capture, and we have tried Mission Planner geotagging with a time offset from the log, but we really want geotagged images from the UAV in flight, without waiting to land. (A rough sketch of the hot-shoe idea is below the list.)

3. The image compositing software I wrote needs work. It currently georeferences each image from GPS, altitude, and attitude, then warps it to correct distortion; in QGIS the images are simply overlaid based on GPS, with no stitching or edge detection. Obviously this could be improved. We are investigating Pix4D. What techniques or software would you recommend? Continuous processing (adding new images to the composite automatically as they come down from the UAV) would be a huge bonus; if it has to be done as one batch instead, processing time matters to us. (The world-file step we use for the QGIS overlay is sketched below the list.)

4. Our GPS may not be good enough for this application. We use the 3DR uBlox GPS module, which updates at 5 Hz and is specified at about 2.5 m accuracy. Should we replace it with something better? Our goal is to generate georeferenced, possibly orthorectified composites with better than 0.25 m pixel resolution. In ground testing we see +/- 1 m GPS accuracy more than half the time.
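
To make question 1 concrete, here is roughly how we drive the camera today: a loop around the gphoto2 command line, timed so we can see where the ~4 s per frame goes. The frame count is arbitrary and the script assumes the G9 is the only camera on the USB bus.

```python
#!/usr/bin/env python
"""Time gphoto2 triggers to see where the ~4 s per frame goes.

Assumes the gphoto2 CLI is installed and the G9 is the only camera on USB.
"""
import subprocess
import time

NUM_FRAMES = 10  # arbitrary number of test shots

for i in range(NUM_FRAMES):
    start = time.time()
    # --trigger-capture fires the shutter without downloading the image,
    # which is the fastest option gphoto2 gives us over USB.
    subprocess.check_call(["gphoto2", "--trigger-capture"])
    print("frame %d: trigger took %.2f s" % (i, time.time() - start))
```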
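
For question 2, this is the kind of synchronization I have in mind: the hot shoe pulls a GPIO pin low when the shutter actually fires, and the handler stamps that frame with the latest MAVLink position and attitude from the Pixhawk. The pin number, serial device, baud rate, and CSV layout are assumptions about our own wiring and logging, not anything the autopilot dictates.

```python
#!/usr/bin/env python
"""Stamp each hot-shoe pulse with the latest Pixhawk telemetry.

Assumes the G9 hot shoe is level-shifted onto GPIO 17 (BCM) and the
Pixhawk telemetry port is wired to /dev/ttyAMA0 at 57600 baud.
"""
import csv
import threading
import time

import RPi.GPIO as GPIO
from pymavlink import mavutil

HOTSHOE_PIN = 17   # BCM numbering; depends entirely on our wiring
latest = {}        # most recent GLOBAL_POSITION_INT / ATTITUDE messages
lock = threading.Lock()

def telemetry_loop(conn):
    """Cache the newest position and attitude messages from the Pixhawk."""
    while True:
        msg = conn.recv_match(type=["GLOBAL_POSITION_INT", "ATTITUDE"],
                              blocking=True)
        with lock:
            latest[msg.get_type()] = msg

def on_shutter(channel):
    """Hot-shoe pulse: write one geotag row for the frame just exposed."""
    with lock:
        pos = latest.get("GLOBAL_POSITION_INT")
        att = latest.get("ATTITUDE")
    if pos is None or att is None:
        return  # no telemetry yet; skip this frame
    writer.writerow([time.time(),
                     pos.lat / 1e7, pos.lon / 1e7, pos.relative_alt / 1000.0,
                     att.roll, att.pitch, att.yaw])
    logfile.flush()

conn = mavutil.mavlink_connection("/dev/ttyAMA0", baud=57600)
t = threading.Thread(target=telemetry_loop, args=(conn,))
t.daemon = True
t.start()

logfile = open("geotags.csv", "w")
writer = csv.writer(logfile)
writer.writerow(["unix_time", "lat", "lon", "alt_m", "roll", "pitch", "yaw"])

GPIO.setmode(GPIO.BCM)
GPIO.setup(HOTSHOE_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.add_event_detect(HOTSHOE_PIN, GPIO.FALLING, callback=on_shutter,
                      bouncetime=200)

try:
    while True:
        time.sleep(1)
finally:
    GPIO.cleanup()
```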
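
For question 3, this is roughly what the QGIS overlay step amounts to today: write an ESRI world file next to each image so QGIS can place it from GPS and altitude alone. The focal length, sensor width, and image size are assumed figures for the G9 at its wide end, attitude is ignored (nadir camera assumed), and the metres-to-degrees conversion is a simple equirectangular approximation.

```python
#!/usr/bin/env python
"""Write a .jgw world file so QGIS can place a JPEG from GPS + altitude.

Sensor and lens numbers are rough assumptions for the G9 at its wide end;
attitude is ignored (camera assumed nadir-pointing).
"""
import math

FOCAL_LENGTH_MM = 7.4    # assumed wide-end focal length
SENSOR_WIDTH_MM = 7.6    # assumed sensor width
IMAGE_WIDTH_PX = 4000    # assumed image width
IMAGE_HEIGHT_PX = 3000   # assumed image height

def write_world_file(jpeg_path, lat, lon, alt_m):
    """Emit the six-line world file QGIS expects next to the JPEG."""
    # Ground sample distance (metres per pixel) from similar triangles.
    gsd = alt_m * SENSOR_WIDTH_MM / (FOCAL_LENGTH_MM * IMAGE_WIDTH_PX)

    # Metres -> degrees, simple equirectangular approximation.
    deg_per_m_lat = 1.0 / 111320.0
    deg_per_m_lon = deg_per_m_lat / math.cos(math.radians(lat))
    x_size = gsd * deg_per_m_lon   # pixel width in degrees of longitude
    y_size = gsd * deg_per_m_lat   # pixel height in degrees of latitude

    # GPS fix assumed to be the image centre; work back to the upper-left pixel.
    ul_lon = lon - (IMAGE_WIDTH_PX / 2.0) * x_size
    ul_lat = lat + (IMAGE_HEIGHT_PX / 2.0) * y_size

    world_path = jpeg_path.rsplit(".", 1)[0] + ".jgw"
    with open(world_path, "w") as f:
        # Order: x size, rotation, rotation, negative y size, UL x, UL y.
        f.write("%.12f\n0.0\n0.0\n%.12f\n%.12f\n%.12f\n"
                % (x_size, -y_size, ul_lon, ul_lat))
    return world_path

# Example: a frame taken from 100 m over lat 45.0, lon -75.0.
# write_world_file("frame_0001.jpg", 45.0, -75.0, 100.0)
```

With these assumed figures the ground sample distance at 100 m works out to roughly 2-3 cm per pixel, which is why we suspect question 4 (GPS and attitude accuracy) is the limiting factor for the 0.25 m target rather than the camera itself.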

Thanks.
