3D Robotics

Training autonomous cars with Grand Theft Auto

This is super cool -- training with scalable simulations is the future of robotic AI.  From Hackaday:

For all the complexity involved in driving, it becomes second nature to respond to pedestrians, environmental conditions, even the basic rules of the road. When it comes to AI, teaching machine learning algorithms how to drive in a virtual world makes sense when the real one is packed full of squishy humans and other potential catastrophes. So, why not use the wildly successful virtual world of Grand Theft Auto V to teach machine learning programs to operate a vehicle?

[Image: a GTA V frame shown half as raw gameplay, half as its annotation]

The hard problem with this approach is getting a large enough sample for the machine learning to be viable. The idea is this: the virtual world provides a far more efficient solution to supplying enough data to these programs compared to the time-consuming task of annotating object data from real-world images. In addition to scaling up the amount of data, researchers can manipulate weather, traffic, pedestrians and more to create complex conditions with which to train AI.
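The appeal of the simulated world is that every rendered frame comes with its labels for free, and conditions can be swept programmatically. A minimal sketch of that idea, with a hypothetical label set and a stand-in `render()` in place of the game engine (all names here are illustrative, not from the paper):

```python
from dataclasses import dataclass
import random

# Hypothetical label set -- in the real pipeline the engine knows what every
# pixel is, so the per-pixel class mask comes "for free" with the frame.
CLASSES = ["road", "car", "pedestrian", "sky"]

@dataclass
class Scene:
    weather: str
    traffic_density: float
    pedestrians: int

def render(scene, width=4, height=3, seed=0):
    """Stand-in renderer: returns an RGB frame and a per-pixel class mask."""
    rng = random.Random(hash((scene.weather, scene.traffic_density, seed)))
    frame = [[(rng.randrange(256), rng.randrange(256), rng.randrange(256))
              for _ in range(width)] for _ in range(height)]
    mask = [[rng.randrange(len(CLASSES)) for _ in range(width)]
            for _ in range(height)]
    return frame, mask

# Sweep conditions cheaply -- the whole point of a simulated world.
dataset = []
for weather in ("clear", "rain", "fog"):
    for density in (0.2, 0.8):
        scene = Scene(weather, density, pedestrians=5)
        frame, mask = render(scene)
        dataset.append((scene, frame, mask))
```

Each `(scene, frame, mask)` triple is one labeled training example; varying weather and traffic density multiplies the dataset without a single human annotator.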

It’s pretty easy to teach the “rules of the road” — we do it with 16-year-olds all the time. But those earliest drivers have already spent a lifetime observing the real world and watching parents drive. The virtual world inside GTA V is fantastically realistic. Humans are great pattern recognizers, and fickle gamers would cry foul at anything that doesn’t mirror real life. What we’re left with is a near-perfect source of test cases for machine learning to be applied to the hard part of self-driving: understanding the vastly variable world every vehicle encounters.

A team of researchers from Intel Labs and Darmstadt University in Germany created a program that automatically indexes the virtual world (as seen above), creating useful data for a machine learning program to consume. This isn’t a complete substitute for real-world experience, mind you, but the freedom to make a few mistakes before putting an AI behind the wheel of a vehicle has the potential to speed up development of autonomous vehicles. Read the paper the team published: Playing for Data: Ground Truth from Video Games.

Before you think this could go horribly wrong, check out this mini-rally truck that taught itself how to powerslide and realize that this method of teaching AI how to drive could actually be totally awesome. Also realize that this research is just annotating still images at about 7 seconds per image. That’s a couple orders of magnitude faster than manually annotating real-world images — great for learning, but we’re still very far away from real-time and real-world implementation.
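To put "a couple orders of magnitude" in perspective, here is the back-of-the-envelope arithmetic. The ~90-minute figure for dense manual annotation of a real photo is an assumption (a number commonly cited for dense urban-scene labeling), not something stated in this post:

```python
# 7 s per game frame vs. dense manual annotation of a real-world photo.
game_s = 7                 # from the article: ~7 seconds per image
manual_s = 90 * 60         # ASSUMED: ~90 minutes per manually labeled image
speedup = manual_s / game_s
print(f"roughly {speedup:.0f}x faster")
```

Under that assumption the speedup lands between two and three orders of magnitude, which matches the article's "couple orders of magnitude" framing.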

We really hope that a team of research assistants were paid to play a lot of GTA V in a serious scientific effort to harvest this data set. That brings to mind one serious speed bump. The game is copyrighted, and you can’t just do anything you want with recordings of gameplay — though the researchers do mention that gameplay footage is allowed under certain non-commercial circumstances. That means that Uber, Google, Apple, Tesla, every major auto company, and anyone else developing autonomous vehicles as a business model will be locked out of this data source unless Rockstar Games comes up with a licensing model. Maybe your next car will have a “Powered by GTA” sticker on it.

[MIT Technology Review via Popular Science]


Comments

  • My contribution to IT science is a single-pass sorting algorithm for large sets of natural numbers that I developed 40+ years ago; it is run billions of times every hour worldwide.

    BTW

    Autonomous cars should not be allowed to populate public roads, since there is a legal conflict of interest between
    a $100T autonomous car and a $10T manned car in case of a crash.

    Autonomous cars may be taught to crash into manned cars to win car-accident compensation.

  • Patrick, thanks for the link, very interesting

  • Absolutely Paul,

    You can see here a similar setup by Neurala, using MS Flight Simulator.

    The difficulty today is getting a rich dataset with the complementary semantics needed for efficient training.

    In the Grand Theft Auto work above, they show exactly how to take these steps; until we have a simulator that provides pixel-level semantic segmentation ground truth, we have to add these annotations manually to feed the neural network.

  • So could we train UAVs using X-Plane?
