
In this short blog series I’m outlining the hardware and software of The Groundhog, my entry into the recent MAAXX-Europe autonomous drone competition held at the University of the West of England, Bristol.

In this post I shall give an overview of the approach taken in the image recognition system used to track the line being followed around the track.  Remember the line is red, about 50mm across, and forms an oval track 20m by 6m.  We are attempting to race around as fast as we can, avoiding other UAVs if necessary.

Please note this blog is not a line-by-line treatment of the code; indeed, the code provided is neither tidied nor prepared as if for ‘instruction’.  In fact it is the product of much trial and change over the competition weekend!  Nevertheless, I hope it provides some useful pointers for the key ideas developed.

Approach

Image recognition is undertaken by the Raspberry Pi 3 companion computer using Python 2.7 and OpenCV 3.0.   I am indebted to Adrian Rosebrock of pyimagesearch for his many excellent blog posts on setting up and using OpenCV.

So we are trying to follow a red line, about 50mm across, which forms an oval track about 20m x 6m.  However, we have limited processing power with the on-board Raspberry Pi, so we need to minimise the load.  Here’s the approach:

When following the line:

  1. Present a steady and level image by use of a roll/pitch servo gimbal.  This is driven directly by the Pixhawk.
  2. Reduce the image resolution to 320 x 240 pixels to reduce computational load.
  3. Convert the image to a top-down view using inverse perspective mapping.  This will help us accurately measure the angle of the line ahead.
  4. Blur the image and filter for the desired colour (red) using a mask.  This leaves us with the line in white against a black background.
  5. Apply a ‘band’ region of interest across the top of the image.  Use contours to identify where the line being followed intersects this band, giving us one coordinate.
  6. Do the same along the bottom of the image, giving us another coordinate.
  7. Use the coordinates to calculate and return the bearing of the line and intercept on the x-axis.
  8. The bearing is the signal for yaw, the intercept is the signal for roll.
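
Below is a minimal sketch of steps 2 to 7 in Python/OpenCV.  The HSV thresholds, the hard-coded homography source points and the band height are illustrative assumptions, not the values used on the Groundhog:

    import math
    import cv2
    import numpy as np

    WIDTH, HEIGHT = 320, 240
    BAND = 20  # height in pixels of each region-of-interest band

    # Inverse perspective mapping: map four points on the ground plane (as
    # seen by the tilted camera) to the corners of a top-down rectangle.
    # These source points are placeholders and would need calibrating for
    # the actual camera angle.
    SRC = np.float32([[80, 120], [240, 120], [320, 240], [0, 240]])
    DST = np.float32([[0, 0], [WIDTH, 0], [WIDTH, HEIGHT], [0, HEIGHT]])
    H = cv2.getPerspectiveTransform(SRC, DST)

    def line_x_in_band(mask, y0, y1):
        # Where does the line cross this horizontal band?  Take the largest
        # contour in the band and return the x coordinate of its centroid.
        # The [-2] index copes with the differing return signatures of
        # OpenCV 3 and 4.
        cnts = cv2.findContours(mask[y0:y1].copy(), cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
        if not cnts:
            return None
        m = cv2.moments(max(cnts, key=cv2.contourArea))
        if m["m00"] == 0:
            return None
        return int(m["m10"] / m["m00"])

    def process(frame):
        # Steps 2-4: shrink, warp to a top-down view, blur, mask for red.
        small = cv2.resize(frame, (WIDTH, HEIGHT))
        warped = cv2.warpPerspective(small, H, (WIDTH, HEIGHT))
        hsv = cv2.cvtColor(cv2.GaussianBlur(warped, (5, 5), 0),
                           cv2.COLOR_BGR2HSV)
        # Red wraps around the hue axis, so combine two ranges.
        mask = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255)) | \
               cv2.inRange(hsv, (170, 100, 100), (180, 255, 255))
        # Steps 5-6: locate the line in the top and bottom bands.
        x_top = line_x_in_band(mask, 0, BAND)
        x_bot = line_x_in_band(mask, HEIGHT - BAND, HEIGHT)
        if x_top is None or x_bot is None:
            return None  # line lost
        # Step 7: bearing of the line relative to straight ahead, plus its
        # intercept on the x-axis at the bottom of the view.
        bearing = math.degrees(math.atan2(x_top - x_bot, HEIGHT - BAND))
        return bearing, x_bot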

If the line is lost:

  1. Return the bearing of any part of the line that can be located.
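
If both bands miss, one hedged fallback (my illustrative reading of this step, reusing the names from the sketch above, not the competition code) is to take the centroid of the largest red blob anywhere in the mask and return a bearing towards it from the bottom-centre of the image:

    def fallback_bearing(mask):
        # Bearing towards the largest red blob anywhere in the mask,
        # measured from the bottom-centre of the image.
        cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
        if not cnts:
            return None
        m = cv2.moments(max(cnts, key=cv2.contourArea))
        if m["m00"] == 0:
            return None
        cx = m["m10"] / m["m00"]
        cy = m["m01"] / m["m00"]
        return math.degrees(math.atan2(cx - WIDTH / 2.0, HEIGHT - cy))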

Additional features:

  1. Maintain a 5-frame moving average for the bearing and offset to reduce noise.
  2. Maintain a ‘confidence’ level to assess the quality of the lock, and thus a means to establish whether it has been lost.
  3. Rather than being fixed, the two region-of-interest bands range from the bottom and top of the image until they find the line.  This gives the best chance of locating the line in sub-optimal conditions, whilst still giving a valid bearing and offset.
  4. The image capture and processing code is implemented as a class running in a separate thread on the RPi (see the sketch after this list).  This permits much more efficient use of the RPi and again credit is given directly to Adrian Rosebrock at pyimagesearch.
  5. When following, the width of the image is variable, widening as the bearing increases to try to keep the line in view.
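
Here is a minimal sketch of the threaded capture class of item 4, following the pattern from Adrian Rosebrock's pyimagesearch posts, together with a deque-based moving average for item 1.  For simplicity the camera is opened with cv2.VideoCapture; on the aircraft the Pi camera module would normally be driven via picamera, and everything shown is an assumption rather than the competition code:

    from collections import deque
    from threading import Thread
    import cv2

    class VideoStream:
        # Grab frames on a separate thread so that the processing loop
        # never blocks waiting on camera I/O.
        def __init__(self, src=0):
            self.cap = cv2.VideoCapture(src)
            self.grabbed, self.frame = self.cap.read()
            self.stopped = False

        def start(self):
            Thread(target=self.update, args=()).start()
            return self

        def update(self):
            while not self.stopped:
                self.grabbed, self.frame = self.cap.read()
            self.cap.release()

        def read(self):
            return self.frame  # always the most recent frame

        def stop(self):
            self.stopped = True

    # 5-frame moving average for the bearing (the evaluation below notes
    # this was later removed because of the lag it introduced).
    history = deque(maxlen=5)

    def smoothed(value):
        history.append(value)
        return sum(history) / float(len(history))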

Competition Evaluation

The approach worked well during the competition.  In fact, it was clear that the ability of the Groundhog to detect and measure the line exceeded that of most other competitors.  Other notable points were:

  • The returned bearing and offsets seemed sensible.
  • The lock was maintained reliably whenever the line was in view, even if only partially.
  • Performance ranged continuously from around 13fps (widest image) to 19fps (narrowest view).

However:

  • The 5-point moving average for the bearing and offset seemed to produce noticeable lag.  As the lock seemed very reliable without it, the moving average was removed later in the competition, which seemed to improve the response times.
  • The optimum camera angle was difficult to achieve.  Furthermore, moving the camera changes its perspective and hence the homography used for the inverse perspective mapping.  Calculation of the homography is fixed in the code and so does not take account of this change, thus creating errors in the calculated angles.  Ideally the homography would be calculated dynamically from the camera angle; a sketch of how that might be done follows below.
  • Repeated application of the homography to warp the image caused frequent segmentation faults.  This remained a problem throughout the competition and I suspect it was due to an imperfect compilation of OpenCV.
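
For completeness, here is a hedged sketch of how the homography might be derived from the camera angle rather than hard-coded.  Everything in it is an assumption (the intrinsic matrix K, the camera height, the pitch sign convention and the ground rectangle): it projects the four corners of a ground rectangle through a pinhole model at the given pitch and lets OpenCV fit the perspective transform.

    import cv2
    import numpy as np

    def ipm_homography(K, pitch_deg, cam_height,
                       near=0.3, far=1.5, half_width=0.5,
                       out_size=(320, 240)):
        # Camera frame: x right, y down, z forward; the ground plane lies
        # cam_height below the camera.  Signs may need adjusting for a
        # particular mounting.
        t = np.radians(pitch_deg)
        Rx = np.array([[1.0, 0.0, 0.0],
                       [0.0, np.cos(t), -np.sin(t)],
                       [0.0, np.sin(t), np.cos(t)]])
        # Corners of a ground rectangle ahead of the (level) camera.
        corners = np.array([[-half_width, cam_height, far],
                            [half_width, cam_height, far],
                            [half_width, cam_height, near],
                            [-half_width, cam_height, near]])
        # Rotate into the pitched camera frame, then project through K.
        pix = np.dot(K, np.dot(Rx, corners.T)).T
        src = (pix[:, :2] / pix[:, 2:3]).astype(np.float32)
        w, h = out_size
        dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
        return cv2.getPerspectiveTransform(src, dst)

Recomputing this transform whenever the camera mount angle changes would remove the need to recalibrate the fixed source points by hand.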

Image Sequence


The unedited image of a test line on my desk. Obviously the parallel lines run towards a vanishing point.

This view is after the image has been warped to get the ‘top-down’ view.  It also shows the upper and lower regions of interest, with a successful lock on the line in each.

Image Processing Code

Please see my website post here.

The code will also be posted on GitHub shortly, and I'll edit this post as soon as that happens.


Comments

  • Hi Chris,

    I thought I'd looked at everything line-following on YouTube, but this came out after the competition.  It looks awesome!  Far more sophisticated than my efforts!

    I'll take a closer look and many thanks.  Hopefully I'll be able to contribute something back soon.

    Mike

  • 3D Robotics

    Great post. This is almost exactly what we do with our ground rovers, using the same basic tools (OpenCV and RasPi). You can see some samples here and here.

    You might also want to check out the OpenMV cam, which does the same thing in an even smaller, cheaper package.  Here's a demo of it following a blue line, but it can do so with any color/width. 

    Simple RaspberryPi-based Autonomous Car