
This is part two of a two-part series expanding on our Master's projects at UC Berkeley. Sponsored by 3D Robotics Research & Development, and working with the Cyber Physical Cloud Computing research group, we looked into possible commercial applications of drone technology and worked on a few sub-projects that we felt would help or enable companies developing products around drones. Part one, about real-time image processing, is here.

You just assembled your new multi-rotor and went flying. Out of nowhere, one of the propeller blades snaps in mid-air. Your UAS comes crashing down and hits somebody's parked car.

 

The scenario presented is an example of how UAS crashes can become very expensive. You will have to purchase new parts to replace the broken components. Worse still, you may end up with an open-ended amount of liability for causing personal injury or property damage. It is therefore important to detect structural problems on an unmanned vehicle before they cause a crash. In our project, we looked at the feasibility of implementing a vibration monitoring system on a UAS to detect structural problems pre-flight.

 

A low-cost warning system can be implemented by attaching accelerometers to a multi-rotor frame. Before each take-off, these accelerometers measure the vibration at various points on the vehicle and send the readings to an onboard computer. This computer then analyses the data and tries to predict impending failure.

 

In our project, we attached a single accelerometer (or Inertial Measurement Unit, IMU) to a UAS and measured the vibrations prior to take-off. We then introduced artificial structural problems: for example, we loosened the screws on certain joints, characterized the vibration recorded by the accelerometer at various motor speeds, and then compared the cases. An LIS344ALH accelerometer from STMicroelectronics was used to measure the vibrations, and an MSP430F2618 microcontroller was used to process the data and carry out the Fourier Transform.

 

The setup is shown in the pictures below for the normal case and for two induced structural problems.

 

[Photos: test setup for the normal case and for two induced structural problems]

 

After spinning the motors and taking measurements, we applied a Fourier Transform to investigate how the vibrational energy was distributed in the frequency spectrum.
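To make this step concrete, here is a minimal sketch of the frequency analysis in Python with NumPy (our actual implementation ran the Fourier Transform on the MSP430 itself). The 1 kHz sample rate, the one-second window and the synthetic 120 Hz test signal are assumptions for illustration only.

```python
import numpy as np

def vibration_spectrum(samples, sample_rate_hz=1000):
    """Return (frequencies, magnitudes) for one window of accelerometer readings."""
    window = samples - np.mean(samples)          # remove the gravity/DC offset
    spectrum = np.fft.rfft(window)               # FFT for real-valued input
    freqs = np.fft.rfftfreq(len(window), d=1.0 / sample_rate_hz)
    magnitudes = np.abs(spectrum) / len(window)  # normalise by window length
    return freqs, magnitudes

# Synthetic one-second capture dominated by a 120 Hz vibration (illustration only).
t = np.arange(0, 1.0, 1.0 / 1000)
fake_samples = 0.5 * np.sin(2 * np.pi * 120 * t) + 0.05 * np.random.randn(t.size)
freqs, mags = vibration_spectrum(fake_samples)
print("Dominant frequency: %.1f Hz" % freqs[np.argmax(mags[1:]) + 1])
```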

 

For example, the following figures show the same motor spinning at the same speed, with the motor slightly loosened for the plots on the right.

 

[Plots: vibration spectra for the same motor at the same speed; normal case (left), loosened motor (right)]

 

From these plots, we demonstrated that it is possible to spot structural problems simply by monitoring the magnitude of the vibrations. If the vibration magnitude exceeds a pre-determined baseline by a factor of ten, we can trigger a warning and alert the aircraft operator to impending failure. More work is needed to map out exhaustively which vibrational frequencies and energies correspond to which failure modes, which would also speed up the repair process.
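A warning rule along these lines could look like the sketch below. The bin-by-bin comparison against a stored baseline spectrum and the factor of ten are illustrative assumptions, not the exact rule from our report.

```python
import numpy as np

def vibration_warning(freqs, magnitudes, baseline_magnitudes, factor=10.0):
    """Warn if any frequency bin exceeds `factor` times its healthy baseline.

    `baseline_magnitudes` would be recorded once from a known-good airframe at
    the same motor speed; both spectra must share the same frequency bins.
    """
    safe_baseline = np.maximum(baseline_magnitudes, 1e-9)   # avoid divide-by-zero
    ratios = magnitudes / safe_baseline
    if np.any(ratios > factor):
        worst = int(np.argmax(ratios))
        print("WARNING: vibration at %.1f Hz is %.0fx baseline - inspect before flight"
              % (freqs[worst], ratios[worst]))
        return True
    return False
```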

 

In a class project, we applied machine learning techniques to the data gathered above (over 350 one-second samples) and achieved up to 98% classification accuracy in determining whether the aircraft was in a failure state.
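Our actual machine learning code is linked at the end of this post; purely as a flavour of the approach, the sketch below trains and cross-validates a simple classifier on FFT magnitude features using scikit-learn. The choice of an RBF support vector machine and the five-fold split are assumptions for illustration, not necessarily what we used.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def spectra_to_features(windows):
    """Turn an (n_windows, n_readings) array of one-second captures into
    FFT magnitude features, one row per window."""
    centred = windows - windows.mean(axis=1, keepdims=True)
    return np.abs(np.fft.rfft(centred, axis=1))

def evaluate(windows, labels):
    """Cross-validate a healthy-vs-failure classifier on labelled windows."""
    features = spectra_to_features(windows)
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    scores = cross_val_score(model, features, labels, cv=5)
    print("Cross-validated accuracy: %.1f%%" % (100 * scores.mean()))
```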

 

If implemented industry-wide, this system could serve as an extra layer of protection for the public and for civilian UAS operators. It would also help promote UASs as a safer and more reliable platform for leisure, transport and surveillance.

 

The full project report is here, while our report on classifying this data using machine learning techniques is here. Our machine learning code is available here. Credit goes to Chris Echanique for his contribution to the machine learning work.


Finally, thank you to Professor Raja Sengupta for the advice, and to Dr. Brandon Basso from 3D Robotics for advising us and keeping us well stocked with hardware!


A team of four Master's students from the Electrical Engineering and Computer Science and the Industrial Engineering and Operations Research departments at UC Berkeley (Armanda Kouassi, Emmanuelle M'bolo, Sunil Shah and Yong Keong Yap) worked on a capstone project, Drones: The Killer App, sponsored by 3D Robotics Research & Development.

 The Master of Engineering program aims to create engineering leaders of the future by combining courses in leadership with a technical core. As part of this project, we looked into possible commercial applications of drone technology and worked on a few sub-projects that we felt would help or enable companies developing products around drones.

 

We worked with the Cyber Physical Cloud Computing research group, which studies applications of unmanned aerial vehicles, and were advised jointly by Dr. Brandon Basso from 3D Robotics and Professor Raja Sengupta from UC Berkeley.

 

This is the first in a series of two posts where we'll go into some detail about our projects. In this post, we'll talk about our attempt to implement real-time image processing on low-cost embedded computers such as the BeagleBone Black.

 

It's easily possible to process frames from a video capture device as fast as they arrive (typically 30 frames per second) - if you have a nice fast x86 processor. However, most systems featuring these processors are too large to fly on multi-rotors: they either require significant additional weight in the form of cooling, or a significant amount of space due to their mainboard's footprint. In addition, the power draw of these boards can be as much as that of a single motor (or more). These factors make it impractical to deploy such boards on small UAS.

 

As processing power continues to miniaturise, more options become available, including the recently announced Intel Edison. However, given the high computational demands of unmanned systems, it is still important to architect and write efficient image-processing systems to leave room for additional intelligence.

 

As part of this project, we investigated the use of commodity ARM processor based boards to be used for on-board image processing. We implemented a landing algorithm described in the 2001 paper, "A Vision System for Landing an Unmanned Aerial Vehicle" by Sharp, Shakernia and Sastry. This allows very accurate pose estimation - whereas GPS typically gives you accuracy of a few metres, we saw position estimates that were accurate to within a few centimetres of our multi-rotor's actual position relative to a landing pattern.

 

We attempted to re-use popular open source robotics libraries for our implementation, to speed up development time and to avoid re-inventing the wheel. These included roscopter (which we forked to document and clean up), ROS, and OpenCV. Our approach makes use of a downward facing Logitech webcam and a landing pad with six squares, shown below.

[Photo: the six-square landing pad]

 

The picture below shows our hardware "stack" on a 3DR quadcopter. Below the APM 2.6 is our computer (in this setup, a BeagleBone Black) plus a USB hub. To the left is a Logitech C920 webcam.

[Photo: hardware stack mounted on the 3DR quadcopter]

This approach first labels each corner of the pattern. Since we know the real-world size and layout of each square, and we now know their corner coordinates in "camera" space, it becomes possible to apply a transformation matrix to work out the camera's position relative to the pattern in real-world space.
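In modern OpenCV terms, this step amounts to solving a perspective-n-point problem: given the known real-world corner coordinates of the pad's squares and their matching pixel coordinates, recover the camera pose. The sketch below uses cv2.solvePnP as a stand-in for the matrix computation described in the Sharp, Shakernia and Sastry paper; the intrinsics are assumed to come from a prior camera calibration.

```python
import numpy as np
import cv2

def estimate_pose(object_corners, image_corners, camera_matrix, dist_coeffs):
    """Estimate the camera position relative to the landing pad.

    `object_corners`: (N, 3) corner coordinates on the pad, in metres, with
    z = 0 since the pad is planar.
    `image_corners`:  (N, 2) matching pixel coordinates, in the same order.
    `camera_matrix`, `dist_coeffs`: results of a prior camera calibration.
    """
    ok, rvec, tvec = cv2.solvePnP(
        object_corners.astype(np.float32),
        image_corners.astype(np.float32),
        camera_matrix, dist_coeffs)
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)      # rotation vector -> 3x3 matrix
    camera_position = -rotation.T @ tvec   # camera centre in pad coordinates
    return camera_position.ravel()
```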

[Videos: pose estimation working in the lab and from the air]

 

 

Computationally, the most intensive part of this process is pre-processing the image, finding polygons and then identifying the squares within this set of polygons. We then label the polygons and their corners. This process is shown below:

[Figure: stages of the image processing pipeline]
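A much-simplified stand-in for the polygon-finding step, using standard OpenCV calls (threshold, extract contours, approximate each contour with a polygon and keep convex quadrilaterals), is sketched below. The Otsu threshold and minimum-area cut-off are arbitrary assumptions, not our tuned values.

```python
import cv2

def find_square_candidates(frame, min_area=100.0):
    """Return approximate quadrilaterals found in a BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # findContours returns (contours, hierarchy) in OpenCV 2.x/4.x; indexing
    # [-2] also copes with the three-value return of OpenCV 3.x.
    contours = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2]
    squares = []
    for contour in contours:
        perimeter = cv2.arcLength(contour, True)
        approx = cv2.approxPolyDP(contour, 0.02 * perimeter, True)
        if (len(approx) == 4 and cv2.isContourConvex(approx)
                and cv2.contourArea(approx) > min_area):
            squares.append(approx.reshape(4, 2))
    return squares
```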

Using the BeagleBone Black, a single core ARM board with a 1 GHz Cortex A8 processor and 512 MB of RAM, we were unable to process frames at any more than 3.01 frames per second.

 

After spending a significant amount of time optimising OpenCV - building it with the optional ARM-specific NEON SIMD extensions, enabling code-level parallelism through Intel's TBB library, and linking against libjpeg-turbo, an optimised JPEG decoding library - we managed to get the average frame rate up to only 3.20 frames per second.

It was clear that our approach needed to be re-visited. We therefore profiled our code to figure out where the majority of time went, generating a heat diagram (more time spent = darker):

[Figure: profiling heat diagram of our code (darker = more time spent)]

After revisiting our code, we removed the median blur (which turned out not to affect the results) and refactored some of our for loops to avoid unnecessary computation. This took our average frame rate to 5.08 frames per second - considerably better, but still not frequent enough for good real-time control.

We then moved to the more expensive and slightly larger Odroid XU, an ARM board with a Cortex A15 processor featuring four 1.6 GHz cores and 2 GB of RAM. This immediately took us up to 21.58 frames per second, partly due to the increased clock speed and partly due to the extra cores (less context switching between our code and operating system processes).

Finally, we implemented pipelining using Pthreads, dispatching each frame to a free worker thread, shown below.

[Figure: pipelined dispatch of frames to worker threads]

When we ran this implementation using just two threads, we were able to get up to almost 30 frames per second, at a system load average of 1.10 - leaving plenty of headroom for other running processes. Unfortunately, we weren't able to get our controller working well enough to actually show the landing algorithm in action.
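Our pipelining was written in C++ with Pthreads; purely to illustrate the pattern, the sketch below shows a Python equivalent that dispatches each captured frame to whichever worker thread is free and hands results back in capture order. The `process_frame` body is a placeholder for the pose estimation work.

```python
import cv2
from concurrent.futures import ThreadPoolExecutor

def process_frame(frame):
    """Placeholder for the per-frame pose-estimation work."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return float(gray.mean())   # stand-in result; the real code returns a pose

def run_pipeline(device=0, workers=2):
    capture = cv2.VideoCapture(device)
    pending = []                                      # futures, oldest first
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            pending.append(pool.submit(process_frame, frame))
            # Consume results in capture order as they finish, so the landing
            # controller always receives pose estimates oldest-first.
            while pending and pending[0].done():
                result = pending.pop(0).result()
                print("frame processed:", result)
    capture.release()
```

Because most of OpenCV's native routines release Python's interpreter lock while they run, even a thread-based version like this can overlap work across cores, which is the same effect our Pthreads implementation achieved in C++.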

The full project report (including detailed performance figures) can be found here. Our landing code is open source - the pose estimation works perfectly (and quickly!) but our controller needs some work. Feel free to clone or fork it.

Credit is also due to collaborators Constantin Berzan and Nahush Bhanage.


In the next post, we'll talk about our prototype of a structural health monitoring system.
