The aim of this project is to have a quadcopter follow a line based on the classification of images taken by a Pi Camera and run through a simple machine learning algorithm on a Raspberry Pi. I'm using an F450 quadcopter frame, a Pixhawk running the latest ArduCopter firmware, a Raspberry Pi 3 Model B, and an 8MP Pi Camera. The motivation for this project is a paper by A. Giusti et al. published in IEEE Robotics and Automation Letters, "A Machine Learning Approach to Visual Perception of Forest Trails for Mobile Robots". A video of their project can be found here: https://www.youtube.com/watch?v=umRdt3zGgpU

How it works:

The quadcopter carries a single Pi Camera, connected to the Raspberry Pi and pointed down at the track. The video feed from the Pi Camera is converted to frames using OpenCV, and the processed images are run through a supervised machine learning algorithm on the RPi. The algorithm is a simple artificial neural network that classifies each image into one of three classes: whether the track is in the center of the image, to the left, or to the right. The network was trained beforehand on a PC, and the final weights obtained from training are applied to the pixel values of the processed images on the RPi to predict each image's class (left, center, or right).
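As a rough sketch of that pipeline, the forward pass of a one-hidden-layer network over a flattened, downscaled frame might look like the following. The layer sizes, activation functions, and 32x32 input resolution are assumptions, not details from the project; the actual weights would come from the offline training run on the PC.

```python
import numpy as np

CLASSES = ["left", "center", "right"]

def classify(x, W1, b1, W2, b2):
    """Feed a flattened, normalized pixel vector through a small
    one-hidden-layer network and return the predicted class label."""
    h = np.tanh(W1 @ x + b1)      # hidden-layer activations
    scores = W2 @ h + b2          # one raw score per class
    return CLASSES[int(np.argmax(scores))]

def frame_to_vector(frame, size=(32, 32)):
    """Grayscale, downscale, and normalize a BGR frame from the Pi Camera.
    The 32x32 input size is an assumed choice, not from the original post."""
    import cv2  # deferred so the pure-NumPy parts run without OpenCV installed
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.resize(gray, size).astype(np.float32).ravel() / 255.0
```

On the RPi, each captured frame would go through `frame_to_vector` and then `classify` with the weights loaded from the training run.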

Three GPIO pins on the RPi communicate where the track lies in the image. If the track is on the left of the image, the GPIO pin reserved for the left class outputs binary 1 and the other two pins read 0. If the track is at the center, the pin reserved for the center class reads 1 and the other two read 0, and likewise for the right class.
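A minimal sketch of that one-hot GPIO output could look like this. The BCM pin numbers here are placeholders I chose for illustration; the post doesn't specify which pins are used.

```python
# Pin assignments are assumptions (BCM numbering); adjust to your wiring.
PINS = {"left": 17, "center": 27, "right": 22}

def pin_states(label):
    """Return {pin: level} with exactly one pin high for the given class."""
    return {pin: int(name == label) for name, pin in PINS.items()}

def write_gpio(label):
    """Drive the three class pins on a real Pi (requires RPi.GPIO)."""
    import RPi.GPIO as GPIO  # deferred so pin_states() is testable off-Pi
    GPIO.setmode(GPIO.BCM)
    for pin, level in pin_states(label).items():
        GPIO.setup(pin, GPIO.OUT)
        GPIO.output(pin, level)
```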

Based on the binary values at the three GPIO pins, throttle, pitch, and yaw values are to be sent using DroneKit and MAVLink. For example, if the quad has veered off to the left, the track will appear on the right of the image. The machine learning algorithm will classify it as class 'right', and the GPIO pin for the right class will read 1. When this pin reads 1, a small fixed yaw value should be sent to steer the copter back to the right.
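One way to sketch this step, assuming DroneKit and pymavlink are available on the RPi: map the three pin levels to a small relative yaw, then send it as a MAV_CMD_CONDITION_YAW command. The 5-degree step and 10 deg/s yaw speed are assumed tuning values, not from the post.

```python
def yaw_correction(left, center, right, step_deg=5):
    """Map the three class-pin levels to a small relative yaw in degrees.
    Positive = clockwise; the step size is an assumed tuning value."""
    if right:
        return step_deg      # track is right of center -> yaw right
    if left:
        return -step_deg     # track is left of center -> yaw left
    return 0                 # centered: no correction needed

def send_yaw(vehicle, angle_deg):
    """Send a relative MAV_CMD_CONDITION_YAW to the Pixhawk via DroneKit."""
    from pymavlink import mavutil  # deferred: only needed on the vehicle
    direction = 1 if angle_deg >= 0 else -1
    msg = vehicle.message_factory.command_long_encode(
        0, 0,                                    # target system, component
        mavutil.mavlink.MAV_CMD_CONDITION_YAW, 0,
        abs(angle_deg),                          # param 1: yaw angle (deg)
        10,                                      # param 2: yaw speed (deg/s)
        direction,                               # param 3: -1 ccw, 1 cw
        1,                                       # param 4: 1 = relative offset
        0, 0, 0)
    vehicle.send_mavlink(msg)
```

In a GUIDED-mode loop, the RPi would read the three pins each cycle and call `send_yaw(vehicle, yaw_correction(l, c, r))`.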

I am doing this as my final year project. I'm new to MAVProxy and DroneKit, but have been able to set them up on my RPi and can send simple commands like arm/disarm using MAVProxy on the command line. However, I need help understanding and using DroneKit to write this entire autonomous mission. I've looked at other projects like the Red Balloon Finder, but the code is overwhelming for a newbie like me. I know Python, but this is my first time using a companion computer with a Pixhawk. I'm currently reading through the DroneKit and MAVProxy documentation, but it's a lot to grasp in a short time. I'd be really glad if any DroneKit/ArduPilot devs would be willing to help me with the coding.

I'm looking to get some help from anyone who's worked with DroneKit or MAVProxy. Any other feedback or alternative ideas on how I can do this mission are also welcome.


Replies

  • 3D Robotics

    Basically, you just want your line-follower code on the Pi to output angular offsets based on how far off-center the line is, run those through a PID loop to get left/right corrections for the drone, and then send them to the Pixhawk as RC override commands with DroneKit.
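A minimal sketch of the approach this reply suggests, with assumed gains and an assumed RC channel mapping (neither is specified in the thread):

```python
class PID:
    """Minimal PD controller on the line's angular offset.
    The gains are illustrative guesses, not tuned values."""
    def __init__(self, kp=0.8, kd=0.2):
        self.kp, self.kd, self.prev = kp, kd, 0.0

    def step(self, error, dt):
        d = (error - self.prev) / dt if dt > 0 else 0.0
        self.prev = error
        return self.kp * error + self.kd * d

def offset_to_rc(correction, center=1500, span=400):
    """Scale a roughly [-1, 1] correction to an RC PWM value, clamped
    to [center - span, center + span]."""
    pwm = center + correction * span
    return int(max(center - span, min(center + span, pwm)))

# On the vehicle, the result would go out via DroneKit channel overrides,
# e.g. (channel '4' = yaw is an assumption about the radio setup):
# vehicle.channels.overrides['4'] = offset_to_rc(pid.step(error, dt))
```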

  • Wow that's neat. The OpenMV cam looks well suited for this project. Wish I'd heard of it earlier. I'll consider using it for one of my future projects.

    I've designed simple Arduino- and Atmel-based line-follower bots before that use infrared sensors. I thought I could apply the same idea to drones, but didn't realize how extensive MAVLink and DroneKit are, or how complicated writing code for an autonomous mission with them is. I would really appreciate some help with writing DroneKit missions.

  • 3D Robotics

    This is super cool. We do much the same thing with the simple OpenMV cam on rovers. It's a very small and light single-board camera computer that has line following built in and speaks native MAVLink.

