What we're doing at our sister site, DIY Robocars (crazy fast autonomous car racing)

Ten years ago, when I started this site, we were solving some hard technical problems in aerial robotics (such as getting drones to actually fly without crashing!). Now that those problems are largely solved (and many of us went on to found companies that use drone data rather than making drones themselves), my inner geek has taken me to the next set of hard technical problems, which are mostly in autonomous cars, AI and computer vision.

So a few years ago I founded our sister community, DIY Robocars, and today it has more than 10,000 participants around the world holding races and hackathons nearly every weekend. Above is an example (from yesterday's race in Oakland) of the sort of performance we're now seeing -- scale speeds of more than 100 MPH, with advanced computer vision, localization and navigation running at 60 FPS on cars that cost less than $200.
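For readers unfamiliar with the term, "scale speed" is the model's actual speed multiplied by its scale factor -- the speed at which an equivalent full-size car would appear to be moving. A minimal sketch, assuming a hypothetical 1/10-scale car (the function name and numbers are illustrative, not taken from the race):

```python
def scale_speed_mph(actual_mph: float, scale: float) -> float:
    """Convert a model car's real speed to its full-size "scale speed".

    `scale` is the denominator of the model scale: 10 for a 1/10-scale car.
    """
    return actual_mph * scale

# A 1/10-scale car driving at a real 10 mph covers its own car-lengths
# per second at the same rate a full-size car would at 100 mph.
print(scale_speed_mph(10.0, 10))  # -> 100.0
```

So a sub-$200 car only needs a real-world speed on the order of 10 mph to post triple-digit scale speeds.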


Comment by Andreas Gazis on October 22, 2018 at 11:15pm

Nice! This looks like a lot of fun.

Chris, why have we not seen anything similar in flying things? Is the 3D environment too complicated (compared to a flat plane populated with yellow lines and orange triangles)? Is the processing hardware too heavy?

Comment by Chris Anderson on October 23, 2018 at 3:49pm

Mostly because 1) they typically fly outside, where GPS provides all the position info you need, and 2) you don't need as much positional precision in the air, where there are few obstacles. So you can be off by a meter or two and it doesn't really matter. 

Even something like the amazing Skydio drone's optical SLAM doesn't have the precision you need to navigate a track at this speed. Reducing the problem from 3D (drones) to 2D (cars) allows the visual/lidar processing to keep up.

Comment by Andreas Gazis on October 24, 2018 at 1:17am

I see, but position is not the only thing you need; it is just the only thing we have come to rely on in the absence of a viable alternative. The holy grail for flying things remains obstacle detection at all ranges (a Cessna at 1 km, a branch at 2 metres). Active sensors are spectacularly bad at this. Vision is likely the only thing that can hack it, as far as I can tell.

The advantages are gigantic: quads flying through an urban environment, planes landing on a dynamically evaluated spot, all kinds of vehicles doing that much-trumpeted "detect and avoid" (allowing regulators to finally breathe a collective sigh of relief).

What I'm wondering is how these cars would do in a "wild" environment: not yellow lines and orange cones, but dumped on any city street, car park, forest or whatever. The algorithm could work at 10 FPS and they could go much slower, as long as they could crack the environment. Is that even possible without them having to carry a desktop stuffed with GPUs?

Comment by The Sun on October 24, 2018 at 9:13am

There are many SLAM algorithms that exist today that can easily solve the problem of localization to a degree that is perfectly acceptable for driving; LSD-SLAM comes to mind as one that is very efficient.

Perception is also a problem that most AI companies have already solved. Seeing a Cessna at 1 km or a tree branch at 2 metres is not an incredibly challenging problem.

The challenge is not perception or localization; it is planning. What do you do with all of that information? And more importantly, what even is a good decision or action? This is the frontier of most AI-related research in this space.

Comment by Mateusz Sadowski on October 24, 2018 at 9:18am

Wow! That's very impressive at those speeds! If there are ever any articles on ConeSLAM or your competition, I would love to include them in WeeklyRobotics one day!

Comment by Andreas Gazis on October 24, 2018 at 11:04am

@ The Sun

Well, if these algorithms exist, I am not aware of them. In terms of planning, the best I have seen is the MIT airplane (2012), which uses lidar. Then there is the pushbroom algorithm (2015), again from MIT, which, to my understanding, is range-limited. More recently, we have DroNet from ETH Zurich. Finally, there is Skydio, which uses an obscene number of HD cameras and is proprietary.

Action for a Cessna is fairly self-explanatory: get the hell out of the way (ideally observing aviation right-of-way rules). That should be easy, given that the 2012 MIT algorithm could actually path-plan inside an underground garage (I don't think there are many human pilots who could fly like that). If someone can demonstrate this in a satisfactory manner, the whole sense-and-avoid headache for integrating UAVs into airspace will get a whole lot easier, and people will not have to muck around with exotic sensors like Aerotenna.

Comment by The Sun on October 24, 2018 at 1:16pm

Do you mean you are not aware of the SLAM algorithms or the perception ones? The SLAM ones are many and easy to find.

On the perception side, things tend to be deep-learning-based and are not publicly available. That being said, they absolutely exist.

aeiou is doing exactly what you are talking about; they have a video in a blog post on the front page right now.

Comment by Andreas Gazis on October 27, 2018 at 12:17am

I am aware of the SLAM algorithms. Not that you really need full-blown SLAM if your only purpose is to avoid bumping into things, but it helps.

aeiou's Dawn is, as far as I can tell, proprietary. To the best of my knowledge, none of these systems has been shown to work to the satisfaction of regulators. Not that the day can be far off, mind you (at least from a technical point of view), but as long as we don't get diffusion through open source, progress is not going to be as fast as it could be.

