Recently I started working on convolutional neural networks (CNNs) for obstacle avoidance. With the help of DroNet from ETH Zurich and the book "Deep Learning for Computer Vision" by Adrian Rosebrock, I managed to build my first CNN algorithm for obstacle avoidance. The CNN module is written in Python, using Keras with a TensorFlow backend. I included my CNN algorithm in a DroneKit script: by sending MAVLink distance messages to the flight controller in Loiter or Altitude Hold flight mode, my drone is able to avoid obstacles. A brief description and demonstration of the developed CNN is given in the YouTube video. First results are promising.
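
For anyone who wants to experiment with the same idea, here is a minimal sketch (not my exact script) of how a DroneKit script can forward a CNN-derived distance to the flight controller as a MAVLink DISTANCE_SENSOR message; the connection string and the sensor range limits are assumptions.

    from dronekit import connect

    # Connection string and baud rate are assumptions; adjust for your setup.
    vehicle = connect('/dev/ttyAMA0', baud=921600, wait_ready=True)

    def send_distance_cm(distance_cm):
        # DISTANCE_SENSOR fields: time_boot_ms, min_distance (cm),
        # max_distance (cm), current_distance (cm), type, id,
        # orientation (0 = forward), covariance.
        # The 20-700 cm range is an illustrative assumption.
        msg = vehicle.message_factory.distance_sensor_encode(
            0, 20, 700, int(distance_cm), 0, 0, 0, 0)
        vehicle.send_mavlink(msg)
        vehicle.flush()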

Comments

  • @Mike: Thx, looking forward to your results.

  • Great work.  I am working towards this as well using ROS and adding a depth camera.  Many thanks for sharing.

  • My code is available on GitHub: avncalst/drone_cnn

  • During training of my CNN I focused on avoiding people. Static tests at the MCCB club (Charente-Maritime, France) showed that my CNN-drone decided to stop at 5 m. Dynamic tests in Loiter mode, flying at 1.7-2.3 m/s, confirmed that my drone halted at 4-5 m (see included picture). My CNN classifies the real-time video frames into flying (p = 0) and stopping (p = 1). This classification is converted into distances (cm) with a first-order low-pass filter: dist(n) = (1 - alpha)*dist(n-1) + alpha*((1 - p)*340 + 180), with alpha = 0.2; a minimal sketch of this filter is given below the picture. The MAVLink distance messages are sent to the flight controller using DroneKit, enabling obstacle avoidance in Loiter, Guided and Altitude Hold modes.

    [picture: drone halting at 4-5 m during the dynamic test]
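
    A minimal sketch of the filter described above, assuming p_stop is the stop probability produced by the Keras model for each frame (names are illustrative):

        ALPHA = 0.2  # smoothing factor of the low-pass filter

        def update_distance(prev_dist_cm, p_stop):
            # (1 - p)*340 + 180 yields 520 cm when flying (p = 0)
            # and 180 cm when stopping (p = 1)
            raw_cm = (1.0 - p_stop) * 340.0 + 180.0
            return (1.0 - ALPHA) * prev_dist_cm + ALPHA * raw_cm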

  • Bonjour André,

    That is quite good!!

    It makes the concept a real-time avoidance system, even without the Movidius.

    The gain with the Movidius (5x) is less than the "one order of magnitude" Intel's marketing claims, but it is more in the range of what I have seen in my own experiments so far.

    Keep up the good work.

  • Wow, this sounds seriously cool. I would love to see how much aggressive flying it can handle.

  • @Mike Isted: Thx

    @Patrick Poirier

    I optimized my architecture and compared the average processing time Tav of one video frame on the RPi3 with and without Intel's Movidius. The results: Tav (without Movidius) = 0.17 s, Tav (with Movidius) = 0.038 s, roughly a 4.5x speed-up.
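
    For reference, a rough sketch of how such a per-frame average can be measured for the Keras model; the model variable and the 200x200 grayscale input shape (DroNet-style) are assumptions:

        import time
        import numpy as np

        def average_frame_time(model, n_frames=100):
            # One dummy 200x200 grayscale frame (assumed input shape)
            frame = np.random.rand(1, 200, 200, 1).astype('float32')
            start = time.time()
            for _ in range(n_frames):
                model.predict(frame)
            return (time.time() - start) / n_frames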

  • Hi Andre,

    Excellent work, well done.

    I am slowly heading in a similar direction and am migrating from RPi/Picam to Jetson TX2/RealSense. I am also moving to MAVROS.

    Like you, I have learnt much from Adrian Rosebrock and his courses. Happy to swap ideas anytime...

    Best regards,

    Mike

  • @Patrick Poirier

    Bonjour Patrick

    I still have to perform additional testing to get all the characteristics. This will be done in France during the Easter holidays. On my RPi3 I measure a cycle time of 0.2 s (5 Hz).

  • Bonjour André,

    Looking at the video, I see you are using DroNet (https://github.com/uzh-rpg/rpg_public_dronet) for training on a fly/stop scenario, which is the best strategy for testing. The speed seems pretty good on the PC (I would say 30 Hz), but what about running the trained model on the RPi: what speed do you get for fly/stop?

    I just love Adrian's PyImageSearch blog; he has started working with the Movidius on the RPi. Let's keep our fingers crossed that Intel continues support and adds Keras to the toolkit.

    Keep up the good work, and I will certainly give it a try this summer.
