Dear all,

I am happy to share some recent tests of vision-based navigation that will be running on a companion computer in the near future.

We ported SegNet (http://mi.eng.cam.ac.uk/projects/segnet/) from the University of Cambridge onto the NVIDIA Jetson TX1. Running it live, the segmentation thread initially gives us ~0.5 FPS, and we think there is still some room for improvement with further optimization.
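
For anyone who wants to try something similar, below is a minimal sketch of a single SegNet forward pass through pycaffe (SegNet ships with its own Caffe fork). The prototxt/caffemodel file names, the input size, and the assumption that the web-demo prototxt takes raw BGR pixels are placeholders based on the SegNet tutorial layout, not our exact code; adjust them to your own setup.

import cv2
import numpy as np
import caffe  # the caffe-segnet fork that SegNet is built on

# Placeholder paths; use the driving-model files from the SegNet model zoo.
MODEL_DEF = 'segnet_model_driving_webdemo.prototxt'
MODEL_WEIGHTS = 'segnet_weights_driving_webdemo.caffemodel'

caffe.set_mode_gpu()  # run on the TX1's integrated GPU
net = caffe.Net(MODEL_DEF, MODEL_WEIGHTS, caffe.TEST)

def segment(frame_bgr):
    """One SegNet forward pass on a BGR frame (HxWx3 uint8) -> per-pixel label map."""
    data = net.blobs['data']
    _, _, h, w = data.data.shape                  # e.g. 360x480 for the driving model
    resized = cv2.resize(frame_bgr, (w, h))
    # HWC uint8 -> CHW float32; the web-demo prototxt takes raw pixel values,
    # other prototxts may need mean subtraction or scaling.
    data.data[0] = resized.transpose(2, 0, 1).astype(np.float32)
    net.forward()
    scores = net.blobs[net.outputs[0]].data[0]    # classes x h x w
    return scores.argmax(axis=0).astype(np.uint8)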

A tablet with HDMI output playing a YouTube clip acts as the camera input, which feeds into the Jetson TX1 via an HDMI -> USB 3.0 (UVC) converter. The camera thread runs at ~15 FPS at 1080p, which is likely limited by the USB 3.0 controller performance of the Jetson TX1; with a MIPI camera it could be raised much higher.
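
The capture side can be as simple as an OpenCV VideoCapture running in its own thread, so the slow segmentation loop always works on the most recent frame instead of a stale, queued one. This is only a sketch under the assumption that the HDMI -> USB 3.0 converter enumerates as a standard UVC device (/dev/videoN); the property names below are the OpenCV 3 spellings.

import threading
import cv2

class CameraThread(threading.Thread):
    """Continuously grab 1080p frames from the UVC converter and keep only the latest."""
    def __init__(self, device=0, width=1920, height=1080):
        super(CameraThread, self).__init__()
        self.daemon = True
        self.cap = cv2.VideoCapture(device)                 # UVC device index, e.g. /dev/video0
        self.cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)       # cv2.cv.CV_CAP_PROP_* on OpenCV 2.4
        self.cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
        self.lock = threading.Lock()
        self.frame = None
        self.running = True

    def run(self):
        while self.running:
            ok, frame = self.cap.read()
            if ok:
                with self.lock:
                    self.frame = frame

    def latest(self):
        with self.lock:
            return None if self.frame is None else self.frame.copy()

cam = CameraThread(device=0)
cam.start()
# segmentation loop then calls segment(cam.latest()) as fast as it can keep up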

This is quite impressive performance for an embedded chip, and it should make vision-based navigation a reality soon. The test above uses SegNet's pre-trained driving model; meanwhile, we are working on training a model focused on aerial drones.

The screen was captured using an external recorder with HDMI input.

Best

Kai


Comments

  • Developer

    Looking good.  The proof will be when we run it on a drone and use DroneKit to control the drone or the gimbal.
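
    (Not from the original post, just a hypothetical sketch of what that wiring could look like: a toy rule that turns a SegNet label map into DroneKit calls. The connection string, ground class index, and the 20% threshold are made-up placeholders.)

    import numpy as np
    from dronekit import connect, VehicleMode

    # Placeholder link; on a companion computer this is usually a serial or UDP
    # connection to the flight controller.
    vehicle = connect('udp:127.0.0.1:14550', wait_ready=True)

    def react_to_labels(label_map, ground_class=3):
        """Toy rule: if too little of the frame is traversable ground, hold position;
        keep the gimbal pointing down at the scene being segmented."""
        ground_fraction = float((label_map == ground_class).mean())
        if ground_fraction < 0.2:
            vehicle.mode = VehicleMode('LOITER')          # hold position
        vehicle.gimbal.rotate(pitch=-90, roll=0, yaw=0)   # look straight down
        return ground_fraction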
