Obstacle avoidance using a convolutional neural network

Recently I started working on convolutional neural networks (CNNs) for obstacle avoidance. With the help of "DroNet" from ETH Zurich and the "Deep Learning for Computer Vision" book by Adrian Rosebrock, I managed to build my first CNN algorithm for obstacle avoidance. The CNN module is written in Python using Keras with a TensorFlow backend, and I included it in a DroneKit script. By sending MAVLink distance messages to the flight controller in Loiter or Altitude Hold flight mode, my drone is able to avoid obstacles. A brief description and demonstration of the developed CNN is given in the YouTube video. First results are promising.
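The basic flow can be sketched as follows. This is a simplified illustration rather than my exact script: the model file name "obstacle_cnn.h5", the 200x200 grayscale input, and the mapping from the network output to a distance value are assumptions; only the Keras/OpenCV/DroneKit calls and the MAVLink DISTANCE_SENSOR message are as described above.

```python
# Simplified sketch: PiCam frame -> Keras CNN -> MAVLink DISTANCE_SENSOR message.
# Model file, input size and the probability-to-distance mapping are assumptions.
import time
import cv2
import numpy as np
from keras.models import load_model      # Keras with TensorFlow backend
from dronekit import connect

model = load_model("obstacle_cnn.h5")     # hypothetical trained CNN
vehicle = connect("/dev/ttyAMA0", wait_ready=True, baud=921600)
cap = cv2.VideoCapture(0)                 # PiCam exposed as a V4L2 device

def send_distance(distance_cm):
    # DISTANCE_SENSOR: distances in cm, orientation 0 = forward-facing sensor
    msg = vehicle.message_factory.distance_sensor_encode(
        0,                   # time_boot_ms (ignored by ArduPilot)
        20, 700,             # min / max range [cm]
        int(distance_cm),    # current reading [cm]
        0, 0, 0, 0)          # type, id, orientation (forward), covariance
    vehicle.send_mavlink(msg)

while True:
    ok, frame = cap.read()
    if not ok:
        continue
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    x = cv2.resize(gray, (200, 200)).astype("float32") / 255.0
    p_collision = float(model.predict(x.reshape(1, 200, 200, 1))[0][0])
    # Report a short distance when the net predicts an obstacle, a long one otherwise
    send_distance(700 - 680 * p_collision)
    time.sleep(0.05)
```

The idea is that, with proximity-based object avoidance enabled, the flight controller reacts in Loiter or Altitude Hold as the reported distance shrinks.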

Comment by Andreas Gazis on March 12, 2018 at 10:05pm

This is seriously impressive, well done!

A few questions:

1 - The network is only fed individual still images; there is no comparison between the previous and current image, correct?

2 - Is this running on the original Dronekit or DKPY 2.0?

3 - How long does it take between image capture and result being spat out of the net?

4 - Related to above, how fast could you fly and at what range could you detect? Do you think it would be suitable for fixed wings or is the RPi 3 not powerful enough?

5 - The RPi3 has a GPU, did you use it or are the calculations run on the CPU?

Massive respect :)

Comment by andre van calster on March 13, 2018 at 7:31am

Thx for your interest.

  1. I only use OpenCV-captured images from my PiCam video stream.
  2. I use DroneKit version 2.9.0.
  3. I measured a cycle time (time between two consecutive images) of about 0.2 s; a small timing sketch follows below.
  4. More extensive testing will be done during Easter, when I am back in France, Charente-Maritime, at Model Club de la Côte de Beauté. I have no license to fly my drone here in Belgium.
  5. The calculations are run on the CPU of the RPi3.
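
For reference, the cycle time in point 3 can be measured with a few lines like the sketch below (same assumed model file and input size as in the pipeline sketch above); it simply times the gap between consecutive processed frames.

```python
# Rough sketch: measure the cycle time (time between two consecutive processed frames).
import time
import cv2
import numpy as np
from keras.models import load_model

model = load_model("obstacle_cnn.h5")   # hypothetical model file
cap = cv2.VideoCapture(0)

stamps = []
for _ in range(50):
    ok, frame = cap.read()
    if not ok:
        continue
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    x = cv2.resize(gray, (200, 200)).astype("float32") / 255.0
    model.predict(x.reshape(1, 200, 200, 1))
    stamps.append(time.time())

cycle = np.diff(stamps).mean()
print("cycle time: %.3f s (%.1f Hz)" % (cycle, 1.0 / cycle))
```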

Comment by Hugues on March 13, 2018 at 11:43am

Very nice, but stop saying this is AI. This is not artificial intelligence, but pattern/image recognition or an expert system.

Comment by Auturgy on March 13, 2018 at 7:18pm

Great work! Is your plan to publish the code?

Comment by Andreas Gazis on March 14, 2018 at 1:25am

@Hugues: Well, by and large CNNs are bundled under deep learning, which is bundled under machine learning, which is bundled under AI. That kind of nested diagram gets thrown around a lot.

We can argue definitions all we want, but we don't have "real" AI in the sense of general intelligence, nor are we anywhere near it.

Still, given the astonishingly slow progress of AI in general and computer vision in particular (nicely summed up by XKCD) for the last few decades, the current explosion of stuff that can be tackled with CNNs is breathtaking.

Comment by andre van calster on March 14, 2018 at 1:30am

@Auturgy: I intend to publish my code on GitHub once I have cleaned it up.

Comment by Patrick Poirier on March 14, 2018 at 5:51pm

Bonjour André,

Looking at the video, I see you are using DroNet (https://github.com/uzh-rpg/rpg_public_dronet) for training on a fly/stop scenario, which is the best strategy for testing. The speed seems pretty good on the PC (I would say 30 Hz), but what about running the trained model on the RPi: what speed do you get for fly/stop?

I just love Adrian's PyImageSearch blog posts. He started working with the Movidius on the RPi; let's keep our fingers crossed that Intel continues support and adds Keras to the toolkit.

Keep up the good work, and I will certainly give it a try this summer.

Comment by andre van calster on March 15, 2018 at 1:53am

@Patrick Poirier

Bonjour Patrick

I still have to perform additional testing to get all the characteristics. This will be done in France during the Easter holidays. On my RPi3 I measure a cycle time of 0.2 s (5 Hz).

Comment by Mike Isted on April 9, 2018 at 3:03pm

Hi Andre,

Excellent work, well done.

I am slowly heading in a similar direction and am migrating from RPi/PiCam to Jetson TX2/RealSense. I am also moving to MAVROS.

As with you, I have learnt much from Adrian Rosebrock and his courses. Happy to swap ideas anytime...

Best regards,

Mike

Comment by andre van calster on April 11, 2018 at 9:21am

@ Mike Isted: Thx

@Patrick Poirier

I optimized my architecture and compared the average processing time Tav of one video frame on the RPi3 with and without Intel's Movidius stick. The results: Tav (without Movidius) = 0.17 s; Tav (with Movidius) = 0.038 s.
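
For anyone wanting to reproduce that kind of comparison, a rough benchmarking sketch is shown below. The CPU path is plain Keras inference; the Movidius path uses the NCSDK v1 Python API (mvnc) and assumes the network has already been converted to a compiled graph file with the Movidius toolchain. The file names, input size and iteration count are assumptions.

```python
# Rough sketch: average per-frame processing time Tav on the CPU vs the Movidius NCS.
# "obstacle_cnn.h5" and "obstacle_cnn.graph" are hypothetical file names.
import time
import numpy as np
from keras.models import load_model
from mvnc import mvncapi as mvnc          # Movidius NCSDK v1 Python API

x = np.random.rand(200, 200, 1).astype("float32")   # stand-in for a preprocessed frame
N = 100

# CPU path: plain Keras inference on the RPi3
model = load_model("obstacle_cnn.h5")
t0 = time.time()
for _ in range(N):
    model.predict(x.reshape(1, 200, 200, 1))
print("Tav without Movidius: %.3f s" % ((time.time() - t0) / N))

# Movidius path: run the pre-compiled graph on the NCS stick
device = mvnc.Device(mvnc.EnumerateDevices()[0])
device.OpenDevice()
with open("obstacle_cnn.graph", "rb") as f:
    graph = device.AllocateGraph(f.read())
t0 = time.time()
for _ in range(N):
    graph.LoadTensor(x.astype(np.float16), "frame")
    out, _ = graph.GetResult()
print("Tav with Movidius: %.3f s" % ((time.time() - t0) / N))
graph.DeallocateGraph()
device.CloseDevice()
```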
