An autonomous RC car guided by TensorFlow machine vision

demo gif

The blue line is what the model thinks it should do, the green line is what I actually did when a human was steering it.

From Hackaday, a great project showing how to create a self-driving R/C car that can follow a complex road pattern. It uses TensorFlow running on an Intel processor onboard. Click through to read more about the importance of polarizing filters and how to implement TensorFlow.

Unexpectedly, they have eschewed the many ARM-based boards, instead choosing an Intel NUC mini-PC powered by a Core i5 as the brains of the unit. It’s powered by a laptop battery bank and takes input from a webcam. Direction and throttle are computed by the NUC and sent to an Arduino, which handles the car control. There is also a radio control channel allowing the car to be switched between autonomous, human-controlled, and emergency-stop modes.
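The write-up doesn’t spell out the wire format between the NUC and the Arduino, but a minimal sketch of such a link might look like the following. Everything here is an assumption for illustration — the packet layout, the function name, and the value ranges are not the project’s actual protocol:

```python
def encode_drive_command(steering: float, throttle: float) -> bytes:
    """Pack steering and throttle (each in -1.0..1.0) into a small
    ASCII packet that an Arduino sketch could parse line by line.
    Hypothetical format: "S<steer_us> T<throttle_us>\n", where the
    values are standard RC servo pulse widths in microseconds."""
    def to_pulse_us(value: float) -> int:
        # Clamp to the valid range, then map -1..1 onto 1000..2000 us
        # (1500 us being neutral, as on a typical RC receiver).
        value = max(-1.0, min(1.0, value))
        return int(round(1500 + 500 * value))
    return f"S{to_pulse_us(steering)} T{to_pulse_us(throttle)}\n".encode("ascii")

# Example: slight left turn at half throttle.
packet = encode_drive_command(-0.2, 0.5)  # b"S1400 T1750\n"
```

On the real car a packet like this would go out over a USB serial port to the Arduino, which would then drive the steering servo and ESC; a text-based format keeps the Arduino parser trivial and easy to debug with a serial monitor.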

They go into detail on the polarizing and neutral density filters they used with their webcam, something that may make interesting reading for anyone working in machine vision. All their code is open source, and can be found linked from their write-up. Meanwhile the video below the break shows their machine on their test circuit, completing it with varying levels of success.


Comment by Patrick Poirier on October 10, 2016 at 6:03pm

Excellent Blog: They explain in detail how they trained their vehicle.

Who will train a racer drone using the same technique? I am tempted...

It requires a lot of onboard processing: Intel NUC – the Raspberry Pi doesn’t really have enough power and is ARM based. An x86-based processor like the i5 in our NUC is much easier to use for machine learning purposes. The exact one you use doesn’t matter.

Comment by Jiro Hattori on October 10, 2016 at 8:38pm

The cost of an i5 NUC and a Tegra is almost the same. Which one would you recommend for this purpose?


3D Robotics
Comment by Chris Anderson on October 10, 2016 at 10:07pm

I'm leaning more towards the Tegra

Comment by Patrick Poirier on October 10, 2016 at 10:25pm
Getting TensorFlow installed on an Intel machine is easy compared to this: http://stackoverflow.com/questions/39783919/tensorflow-on-nvidia-tx1/
And check the date: October 6.

3D Robotics
Comment by Chris Anderson on October 10, 2016 at 10:31pm

Yikes. Intel it is!

Comment by Patrick Poirier on October 10, 2016 at 10:41pm
With the new JetPack 2.3 it should be twice as fast, but there are no benchmarks so far. That is basically the problem with the TX1: on paper it is a fantastic device, but in practice it lacks maturity.
Comment by Jiro Hattori on October 10, 2016 at 11:27pm

Thank you for interesting input.

Maybe the TX1 is the way to go when you intend to fly the thing, but Intel is robust while it moves on the ground :-)

Comment by Jack Crossfire on October 11, 2016 at 4:23pm

What it's probably doing is recalling the steering commands from a library of past images rather than interpreting the position of the lines. TensorFlow is the new SURF. Despite getting a lot farther than mistaking his laptop for a toaster, he is not in Inc Magazine's 30 Under 30, so no buyout.

Comment by Patrick Poirier on October 11, 2016 at 5:04pm

Jack, TensorFlow uses the library of past images to train a neural network, so the vehicle is driven by decisions from an artificial-intelligence system that is quite close to existing self-driving cars.

This example may look like a glorified line follower that you could build using simple analog comparators, but the fundamental difference lies in the backend, which is a totally new concept.

TensorFlow, or any well-designed neural network, has the potential to control a racing drone and win a race once we can get enough processing power airborne and cleverly train on a drone-steering dataset.
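The idea Patrick describes — learning a steering policy from recorded human driving, often called behavioural cloning — can be shown with a toy example. This is not the project’s code: it is a minimal pure-Python sketch that fits a linear model from a fake one-number “image feature” to the recorded steering by gradient descent, whereas the real system trains a convolutional network in TensorFlow on camera frames:

```python
import random

# Fake training data: each "image" is reduced to a single feature,
# the horizontal offset of the line in the frame (-1 = far left,
# 1 = far right), paired with the steering the human driver applied.
# The assumed demonstrator policy here is steer = -0.8 * offset.
random.seed(0)
data = [(x, -0.8 * x) for x in (random.uniform(-1, 1) for _ in range(200))]

# Linear policy: steering = w * offset + b, trained by gradient
# descent on mean squared error against the human's steering.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    grad_w = grad_b = 0.0
    for offset, human_steer in data:
        err = (w * offset + b) - human_steer
        grad_w += 2 * err * offset / len(data)
        grad_b += 2 * err / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

# The model recovers the demonstrator's policy (w close to -0.8,
# b close to 0), so it generalises to offsets it never saw instead
# of looking up stored frames -- which is the point of training a
# network rather than replaying a library of past images.
```

The same recipe scales up: swap the one-number feature for raw pixels, the linear model for a CNN, and the hand-rolled gradient loop for a TensorFlow optimizer.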


Comment by Global Innovator on October 12, 2016 at 3:42pm

"

The blue line is what the model thinks it should do,

the green line is what I actually did ??????

when a human was steering it."

when you split the animated gif into individual image / video frames with EzGif

http://ezgif.com/split/d29c7f3b5f.gif

you have impression this video / animated gif has been postedited

GIF image data extracted by EzGif:

File size: 3.29M, width: 311px, height: 251px, frames: 145, type: gif

But you can clearly see frame 0 is repeated 10 times, frame 1 three times, and frame 2 is missing. The last frames are 588, 592, 597, 601, and 604 (repeated 5 times).

In frame 577 you can clearly see the algorithm requires some modifications (more training alone does nothing to improve it), since the obstacle (the right boundary of the road) is too close on the right while the obstacle-free horizon is far to the left.

Similar control errors can be seen in frames 544, 547, 535, 522, 468, 364, 369, 355, 340, 322, 311, 136, 131, 127, 122, 107, 103, and frame 0 (???).

Frame 202: human error.

I am really sorry, but this implementation of the algorithm has great potential to crash this model car 10-20 times. It's quite clear the algorithm fails to look for the furthest clear horizon, not to mention any prediction that would follow the road boundaries turning left or right in advance.


© 2019   Created by Chris Anderson.