Dear all,
I am happy to share some recent tests of vision-based navigation that will be running on a companion computer in the near future.
We ported SegNet (http://mi.eng.cam.ac.uk/projects/segnet/) from the University of Cambridge onto the NVIDIA Jetson TX1. Running it on live input initially gives us ~0.5 FPS for the segmentation thread, and we think there is still room for improvement with further optimization.
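For anyone curious what the segmentation thread boils down to, here is a minimal sketch using the SegNet fork of Caffe with its Python bindings. The file names, input size and output handling are assumptions based on the public SegNet webdemo, not our exact code:

    import caffe
    import cv2
    import numpy as np

    # Placeholder model files -- the real ones come from the SegNet model zoo.
    MODEL_DEF = 'segnet_model_driving_webdemo.prototxt'
    MODEL_WEIGHTS = 'segnet_weights_driving_webdemo.caffemodel'

    caffe.set_mode_gpu()  # run inference on the TX1's integrated GPU
    net = caffe.Net(MODEL_DEF, MODEL_WEIGHTS, caffe.TEST)

    def segment(frame_bgr):
        # Resize the camera frame to the network's fixed input size
        # and reorder HWC -> CHW as Caffe expects.
        _, _, h, w = net.blobs['data'].data.shape
        resized = cv2.resize(frame_bgr, (w, h))
        net.blobs['data'].data[0] = resized.transpose((2, 0, 1)).astype(np.float32)
        net.forward()
        scores = net.blobs[net.outputs[0]].data[0]  # per-class score maps
        # Some SegNet demo prototxts already end in an ArgMax layer; in that
        # case the output is already a label map and this argmax can be dropped.
        return scores.argmax(axis=0)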
A tablet with HDMI output playing a YouTube clip acts as the camera input, which goes into the Jetson TX1 via an HDMI-to-USB 3.0 (UVC) converter. The camera thread runs at ~15 FPS at 1080p, which is likely limited by the USB 3.0 controller performance of the Jetson TX1. With a MIPI camera the frame rate could be raised much higher.
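And here is a rough sketch of how the camera thread and the segmentation thread can be tied together with OpenCV 3's Python bindings. The device index, capture properties and the hand-off scheme are again assumptions for illustration:

    import threading
    import cv2

    latest_frame = None
    frame_lock = threading.Lock()

    def camera_thread(device=0):
        global latest_frame
        # The HDMI->USB3.0 converter enumerates as a normal UVC camera.
        cap = cv2.VideoCapture(device)
        cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
        cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            with frame_lock:
                latest_frame = frame  # keep only the newest frame

    def segmentation_thread():
        while True:
            with frame_lock:
                frame = None if latest_frame is None else latest_frame.copy()
            if frame is not None:
                labels = segment(frame)  # segment() from the sketch above
                # ...pass the label map on to the navigation logic here

    threading.Thread(target=camera_thread, daemon=True).start()
    segmentation_thread()

The camera thread only keeps the newest frame, so the much slower segmentation thread always works on the most recent image instead of falling behind a queue.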
This is quite impressive performance for an embedded chip, and it should enable vision-based navigation to become a reality soon. The test above uses SegNet's pre-trained driving model; meanwhile, we are working on training a model focused on aerial drones.
The screen was captured using an external recorder with HDMI input.
Best
Kai
Comments
Looking good. The proof will be when we run it on a drone and use DroneKit to control the drone or the gimbal.
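For reference, driving the gimbal from DroneKit-Python could look roughly like the sketch below; the connection string and the obstacle-based trigger rule are illustrative assumptions only, not something that has been flown:

    from dronekit import connect, VehicleMode

    # Connection string is an assumption -- e.g. a serial telemetry link on the TX1.
    vehicle = connect('/dev/ttyUSB0', wait_ready=True, baud=57600)
    vehicle.mode = VehicleMode('GUIDED')

    def react_to_labels(label_map, obstacle_class=1):
        # Hypothetical rule: tilt the gimbal down when a large fraction of
        # pixels belongs to the obstacle class reported by the segmentation net.
        if (label_map == obstacle_class).mean() > 0.3:
            vehicle.gimbal.rotate(-90, 0, 0)  # pitch, roll, yaw in degrees
        else:
            vehicle.gimbal.rotate(0, 0, 0)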