Kai Yan's Posts (3)

Dear all,

I am happy to share a recent test of vision-based navigation that will run on a companion computer in the near future.

We ported SegNet (http://mi.eng.cam.ac.uk/projects/segnet/) from the University of Cambridge to the NVIDIA Jetson TX1. Running it on a live stream initially gives us ~0.5 FPS for the segmentation thread, and we think there is still room for improvement with further optimization.
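
For reference, here is a minimal sketch of how a single SegNet forward pass can be driven from Python with pycaffe. The prototxt/caffemodel file names and the input blob name are placeholders for illustration, not our actual model files, and mean subtraction is omitted for brevity.

```python
# Minimal SegNet inference sketch using pycaffe (file names are placeholders).
import numpy as np
import caffe
import cv2

caffe.set_mode_gpu()  # run on the TX1's GPU

# Load the network definition and pre-trained driving weights (placeholder paths).
net = caffe.Net('segnet_inference.prototxt', 'segnet_driving.caffemodel', caffe.TEST)

frame = cv2.imread('frame.png')                    # BGR frame from the camera thread
in_h, in_w = net.blobs['data'].data.shape[2:]      # network input size, e.g. 360x480
resized = cv2.resize(frame, (in_w, in_h))

# HWC uint8 -> CHW float32, as Caffe expects
net.blobs['data'].data[0] = resized.transpose(2, 0, 1).astype(np.float32)

out = net.forward()
# Per-pixel class label = argmax over the class channel of the output blob
labels = out[list(out.keys())[0]][0].argmax(axis=0).astype(np.uint8)
print('segmented frame:', labels.shape)
```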

A tablet with HDMI output playing a YouTube clip acts as the camera input, which goes into the Jetson TX1 via an HDMI-to-USB 3.0 (UVC) converter. The camera thread runs at ~15 FPS at 1080p, which is likely limited by the USB 3.0 controller performance of the Jetson TX1; with a MIPI camera it could go much higher.
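
As an illustration of the capture side, here is a rough sketch of a camera thread that always keeps only the newest frame, so the much slower segmentation thread never works on stale, queued frames. The device index and resolution are assumptions.

```python
# Sketch of a capture thread that keeps only the newest frame for the slower
# segmentation thread (device index and resolution are assumptions).
import threading
import cv2

class LatestFrame:
    def __init__(self, device=0, width=1920, height=1080):
        self.cap = cv2.VideoCapture(device)   # UVC device from the HDMI->USB 3.0 converter
        self.cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
        self.cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
        self.lock = threading.Lock()
        self.frame = None
        threading.Thread(target=self._loop, daemon=True).start()

    def _loop(self):
        while True:
            ok, frame = self.cap.read()
            if ok:
                with self.lock:
                    self.frame = frame         # overwrite; only the latest frame is kept

    def read(self):
        with self.lock:
            return None if self.frame is None else self.frame.copy()

# Usage: the segmentation thread calls camera.read() whenever it finishes a frame.
camera = LatestFrame()
```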

This is quite impressive performance for an embedded chip, and it should make vision-based navigation practical soon. The test above uses SegNet's pre-trained driving model; meanwhile, we are working on training a model focused on aerial drones.

The screen was captured using an external recorder with HDMI input.

Best

Kai

Controlling a copter by image recognition

Hi all, here we would like to share recent progress on our image-recognition-based copter controller.

The system uses a tablet and a transmitter. The tablet detects the copter's position by recognizing the red circle marker painted on the copter and tries to keep the copter in the center of the tablet's display (the green circled area on the display), sending corrections via a Bluetooth-to-PPM adapter attached to the trainer port of the transmitter (Futaba 14SG). The control is a simple PID loop: the displayed offset between the red marker and the green center is the error for the P term, the change of position feeds the D term, and a small amount of I is applied. As shown in the attached video, with a fan blowing from the side and the copter deliberately pushed off course by hand, once the switch is turned on the system automatically recovers the copter to the center and keeps it there.
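
To make the loop concrete, here is a simplified sketch of the idea. The HSV thresholds, the gains, and the mapping to stick outputs are illustrative assumptions, not our tuned values.

```python
# Simplified sketch of the marker-centering PID loop described above.
# HSV thresholds, gains and the output mapping are illustrative assumptions.
import cv2
import numpy as np

KP, KI, KD = 0.6, 0.02, 0.25   # placeholder gains ("slightly applied I")
integral = np.zeros(2)
prev_err = np.zeros(2)

def red_marker_center(frame_bgr):
    """Return the (x, y) centroid of the red circle marker, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 120, 80), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 80), (180, 255, 255))
    m = cv2.moments(mask)
    if m['m00'] < 1e-3:
        return None
    return np.array([m['m10'] / m['m00'], m['m01'] / m['m00']])

def pid_step(frame_bgr, dt):
    """Error = red marker position minus green screen center; returns an (x, y) correction."""
    global integral, prev_err
    center = np.array([frame_bgr.shape[1] / 2.0, frame_bgr.shape[0] / 2.0])
    marker = red_marker_center(frame_bgr)
    if marker is None:
        return np.zeros(2)               # no marker found: command neutral sticks
    err = marker - center                # P term input
    integral += err * dt                 # small I term
    deriv = (err - prev_err) / dt        # D term from the change of position
    prev_err = err
    return KP * err + KI * integral + KD * deriv   # mapped to PPM stick values elsewhere
```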

This is an initial attempt; it currently controls only X and Y on a flat plane, and we are trying to add the Z (depth) and yaw axes by adding markers and detecting the change in marker size, roughly as sketched below. The whole detection and control loop runs on the tablet (an NVIDIA SHIELD Tablet); the tablet's rear camera gives good results indoors, and we will try it outdoors soon. The image recognition uses OpenCV, which runs at approximately 20 FPS at 1080p with a noticeable time lag. This lag causes some instability in the control loop; to overcome it, we are trying to implement an approximate delayed-feedback model in the loop.
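
For the planned depth axis, one simple option is to estimate distance from the marker's apparent radius. This is only a sketch under pinhole-camera assumptions with placeholder values, not our final implementation.

```python
# Depth-from-marker-size sketch under a simple pinhole-camera model.
# Focal length and marker radius below are placeholder values.
FOCAL_PX = 1400.0        # assumed focal length in pixels at 1080p
MARKER_RADIUS_M = 0.05   # assumed real radius of the painted red circle (metres)

def estimate_depth(radius_px):
    """Distance to the marker grows as its apparent radius shrinks."""
    if radius_px <= 0:
        return None
    return FOCAL_PX * MARKER_RADIUS_M / radius_px

# Example: a 35-pixel radius corresponds to roughly 2 m under these assumptions.
print(estimate_depth(35.0))
```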

The copter side uses a Pixhawk running APM:Copter 3.2 in AltHold mode. The system was developed for infrastructure-inspection copters, e.g. working under a bridge or beside buildings, where the GPS signal is weak and wind blows unpredictably; in such cases one can fix the tablet on a tripod and both film and control the copter. We met Randy some weeks ago, and he suggested we use MAVLink instead of RC, so we will modify the system to make it even simpler. This is an open-source project by enRoute Co. Ltd., a Dronecode member from Japan, and we will put the source on GitHub once the depth and yaw control is implemented.
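
Following Randy's suggestion, the controller could talk MAVLink to the Pixhawk directly instead of going through the trainer port. A rough pymavlink sketch of what sending velocity setpoints could look like is below; the connection string, type mask and rates are assumptions, not our current setup.

```python
# Rough sketch of replacing the RC/PPM path with MAVLink velocity setpoints
# via pymavlink (connection string, type mask and mode handling are assumptions).
from pymavlink import mavutil

master = mavutil.mavlink_connection('udpin:0.0.0.0:14550')
master.wait_heartbeat()

def send_velocity(vx, vy, vz):
    """Send a local-NED velocity setpoint derived from the PID output."""
    type_mask = 0b0000111111000111          # use only the velocity fields
    master.mav.set_position_target_local_ned_send(
        0,                                  # time_boot_ms
        master.target_system, master.target_component,
        mavutil.mavlink.MAV_FRAME_LOCAL_NED,
        type_mask,
        0, 0, 0,                            # position (ignored)
        vx, vy, vz,                         # velocity in m/s
        0, 0, 0,                            # acceleration (ignored)
        0, 0)                               # yaw, yaw rate (ignored)

send_velocity(0.5, 0.0, 0.0)                # example: move 0.5 m/s north
```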

Cheers,

Kai 

Using a smartphone as the brain of a drone

Hi DIY Drones, we are happy to write here about our drone that uses a smartphone as its brain, and the phone's display as its face! We are trying to build an API set that runs on Android and sends MAVLink commands over USB serial to the Pixhawk.
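
The actual API runs on Android; purely as an illustration of the MAVLink-over-serial message flow it implements, here is a small pymavlink sketch. The serial port name, baud rate, and the arm command are assumptions for the example.

```python
# Illustrative sketch of MAVLink over a USB serial link to the Pixhawk,
# shown with pymavlink (the real API runs on Android; port and baud are assumptions).
from pymavlink import mavutil

# On Android this would be the USB-OTG serial device; '/dev/ttyACM0' is a placeholder.
px4 = mavutil.mavlink_connection('/dev/ttyACM0', baud=57600)
px4.wait_heartbeat()
print('Heard Pixhawk: system %d component %d' % (px4.target_system, px4.target_component))

# Example command: arm the motors.
px4.mav.command_long_send(
    px4.target_system, px4.target_component,
    mavutil.mavlink.MAV_CMD_COMPONENT_ARM_DISARM,
    0,          # confirmation
    1,          # param1: 1 = arm, 0 = disarm
    0, 0, 0, 0, 0, 0)
```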

We were researchers in optimization at Stanford University and the University of Tokyo, where we studied route scheduling and optimization for robotics using simulated annealing of the Ising model. As an application, we aim to implement our algorithm on drones. APM is our base (thanks to every contributor for the amazing work!), but we needed a more powerful processor and network connectivity, so instead of a Raspberry Pi we used an Android phone.
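
As a toy illustration of the kind of route optimization we study, simulated annealing over waypoint orderings looks roughly like the sketch below. This is only an illustration; our actual Ising-model formulation and cooling schedule are more involved.

```python
# Toy simulated-annealing sketch for ordering waypoints (illustration only;
# the actual Ising-model formulation and schedule are more involved).
import math
import random

def route_length(order, points):
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def anneal(points, steps=20000, t_start=1.0, t_end=1e-3):
    order = list(range(len(points)))
    best = order[:]
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)   # geometric cooling
        i, j = random.sample(range(len(points)), 2)
        cand = order[:]
        cand[i], cand[j] = cand[j], cand[i]                  # swap two waypoints
        delta = route_length(cand, points) - route_length(order, points)
        if delta < 0 or random.random() < math.exp(-delta / t):
            order = cand
            if route_length(order, points) < route_length(best, points):
                best = order[:]
    return best

waypoints = [(random.random(), random.random()) for _ in range(12)]
print(anneal(waypoints))
```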

In our campus test flights, nobody cared about what we were studying; people just thought it looked cool! That gave us the idea: why not make it a robot? The phone's display and speaker are already there, and they can also provide visual assistance in a remote rescue mission or a visual alert in aerial patrol.

After trial and error we got the prototype to take off successfully. While we continue to implement our research algorithm, we are also looking into OpenCV. Everything in this project is open source, and we are excited to see how people use the advanced capabilities of a phone to make drones intelligent. We will release the hardware CAD files and software source on GitHub. We set up a start-up in Palo Alto, CA, and to support our research we launched the project on Kickstarter. We use the 3DR Pixhawk as the main flight controller and an Android phone as the co-processor that provides the visual look and the programming platform.

Details on our Kickstarter page:

https://www.kickstarter.com/projects/labromance/lab-the-living-aerial-bot

and our website:

http://www.labromance.com/

Cheers!

Kai
