Chris Anderson's Posts (2718)

3D Robotics

New ROS2 autopilot code for Dronecode/PX4

Great update from Victor Mayoral on ROS 2.0 progress and the further integration of the Dronecode/PX4 stack and ROS

ROS 2.0 native drone flight stack prototype

The video above provides a peek into a working prototype of a software autopilot for drones that speaks ROS 2.0 natively: a modified version of the PX4 flight stack in which the barometer readings fetched from the corresponding sensor are published using ROS 2.0 and consumed by other modules through the subscription primitives of ROS 2.0.

This work aims to explore the potential of autopilots that interoperate natively with ROS and are able to extend their existing functionalities (attitude and altitude stabilization, basic flight modes, mission planning, etc.) with higher level behaviors common in robotics (obstacle avoidance, SLAM, fully autonomous navigation, etc.).
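
To make the pub/sub idea concrete, here is a minimal rclpy sketch of what publishing a sensor reading over ROS 2.0 looks like. This is purely illustrative and not the actual PX4 module (which is C++); the topic name, message type and rate are assumptions.

    # Minimal ROS 2.0 publisher sketch (illustrative only, not the PX4 flight stack code).
    # Topic name, message type and publish rate are assumptions.
    import rclpy
    from rclpy.node import Node
    from std_msgs.msg import Float32

    class BaroPublisher(Node):
        def __init__(self):
            super().__init__('baro_publisher')
            # other modules subscribe to this topic instead of reading the driver directly
            self.pub = self.create_publisher(Float32, 'baro_pressure', 10)
            self.timer = self.create_timer(0.1, self.publish_reading)  # 10 Hz

        def publish_reading(self):
            msg = Float32()
            msg.data = 1013.25  # placeholder: would come from the barometer driver
            self.pub.publish(msg)

    def main():
        rclpy.init()
        rclpy.spin(BaroPublisher())

    if __name__ == '__main__':
        main()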

Code and instructions are available here.

Read more…
3D Robotics

Big new APM:Rover release

Great news from the APM:Rover team. That's the code we use for our autonomous go-kart (above).

APM:Rover Release 3.1.0, 22 December 2016

The ArduPilot development team is proud to announce the release of version 3.1.0 of APM:Rover. This is a major release with a lot of changes so please read the notes carefully!

A huge thanks to ALL the ArduPilot developers. The Rover code benefits tremendously from all the hard work that goes into the Copter and Plane vehicle code. Most of the code changes in this release were not specifically for Rover; however, because of the fantastic architecture of the ArduPilot code, Rover automatically gets those enhancements anyway.

Note that the documentation hasn't yet caught up with all the changes in this release. We are still working on that, but meanwhile if you see a feature that interests you and it isn't documented yet then please ask.

The Release Photo Above:
This is a 300 m wide lake.
The water temperature was 55 °C.
The pH was 0.5 - the same as battery acid.
There were a lot of phreatic explosions in the middle of the lake.
The yellow rafts are sulfur.
Gas masks were necessary because of the very high SO2 concentration.
It was also very hot.
The boat is designed for mapping an acid lake. That's why there is no prop in the water and nearly no metal parts.
And the autopilot - ArduPilot of course - no other autopilot will do.
http://discuss.ardupilot.org/t/sonar-air-boat-working-fine/9912

Release Information

The PX4-v2 build has had CANBUS support removed due to firmware size issues. If Rover users want CANBUS support they will need to install the PX4-v3 build located in the firmware folder here:
http://firmware.ap.ardupilot.org/Rover/stable/PX4/

EKF1 has been removed: EKF2 has long been the default and is working extremely well, and removing EKF1 has made room for EKF3.

EKF3 is included in this release but it is not the default. Should you want to experiment with it, set the following parameters:
AHRS_EKF_TYPE=3
EK3_ENABLE=1
but note it is still experimental and you must fully understand the implications.
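
If you prefer to set these from a script rather than a ground station, a minimal pymavlink sketch might look like the following; the connection string is a placeholder and any GCS parameter editor does the same job.

    # Minimal sketch: set the experimental EKF3 parameters over MAVLink with pymavlink.
    # The connection string is an assumption - adjust it for your telemetry link.
    from pymavlink import mavutil

    master = mavutil.mavlink_connection('udp:127.0.0.1:14550')
    master.wait_heartbeat()

    for name, value in (('AHRS_EKF_TYPE', 3), ('EK3_ENABLE', 1)):
        master.mav.param_set_send(
            master.target_system, master.target_component,
            name.encode('utf-8'), float(value),
            mavutil.mavlink.MAV_PARAM_TYPE_REAL32)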

New GUIDED Command

Rover now accepts a new command, MAV_CMD_NAV_SET_YAW_SPEED, which takes an angle in centidegrees and a speed scale; the rover will drive based on these inputs.
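
As a rough sketch of how a companion computer might exercise this with pymavlink (the connection string is a placeholder, and whether your firmware expects the command as a COMMAND_LONG in GUIDED mode should be confirmed against the Rover docs):

    # Hedged sketch: steer a GUIDED-mode rover with MAV_CMD_NAV_SET_YAW_SPEED.
    # param1 = angle in centidegrees, param2 = speed scale, per the release note.
    from pymavlink import mavutil

    master = mavutil.mavlink_connection('udp:127.0.0.1:14550')
    master.wait_heartbeat()

    angle_cd = 4500.0    # 45 degrees
    speed_scale = 0.5    # half of the configured speed

    master.mav.command_long_send(
        master.target_system, master.target_component,
        mavutil.mavlink.MAV_CMD_NAV_SET_YAW_SPEED,
        0,                       # confirmation
        angle_cd, speed_scale,   # param1, param2
        0, 0, 0, 0, 0)           # unused params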

The ArduPilot team would like to thank EnRoute for the sponsoring of this feature
http://enroute.co.jp/

COMMAND_INT and ROI Commands

COMMAND_INT support has been added to Rover. This has allowed the implementation of SET_POSITION_TARGET_GLOBAL_INT, SET_POSITION_TARGET_LOCAL_NED and DO_SET_ROI as COMMAND_INT messages.

The ArduPilot team would like to thank EnRoute for the sponsoring of this feature
http://enroute.co.jp/

Reverse

It's now possible in a mission to tell the rover to drive in reverse. If you are using Mission Planner, insert a new waypoint using the "Add Below" button on the Flight Plan screen; in the Command drop-down list you will see a new command, "DO_SET_REVERSE". It takes one parameter: 0 for forward, 1 for reverse. It's that simple. Please give it a try and report back any successes, issues or questions.
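
If you build missions from a script instead of Mission Planner, a hedged pymavlink sketch of the same idea (assuming your pymavlink build's ardupilotmega dialect includes MAV_CMD_DO_SET_REVERSE) could look like this; uploading the full mission is left out.

    # Sketch: encode a DO_SET_REVERSE mission item with pymavlink (param1: 0=forward, 1=reverse).
    # Mission upload itself is omitted; this only shows how the item is constructed.
    from pymavlink import mavutil

    def make_set_reverse_item(seq, reverse=True, target_system=1, target_component=1):
        return mavutil.mavlink.MAVLink_mission_item_message(
            target_system, target_component,
            seq,                                    # position in the mission
            mavutil.mavlink.MAV_FRAME_MISSION,      # frame is irrelevant for DO_ commands
            mavutil.mavlink.MAV_CMD_DO_SET_REVERSE,
            0, 1,                                   # current, autocontinue
            1.0 if reverse else 0.0,                # param1: 0 forward, 1 reverse
            0, 0, 0, 0, 0, 0)                       # unused params and lat/lon/alt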

The ArduPilot team would like to thank the Institute for Intelligent Systems Research at Deakin University
(http://www.deakin.edu.au/research/iisri) for the sponsoring of the reverse functionality.

Loiter

This change brings the LOITER commands in line with other ArduPilot vehicles, so both NAV_LOITER_UNLIM and NAV_LOITER_TIME are supported and actively loiter. This means, for instance, that if you have set a boat to loiter at a particular position and the water current pushes the boat off that position, then once the boat has drifted further than the WP_RADIUS distance from that position, the motor(s) will be engaged and the boat will return to the loiter position.

The ArduPilot team would like to thank MarineTech for sponsoring this enhancement.
http://www.marinetech.fr

Note: if you currently use Param1 of a NAV_WAYPOINT to loiter at a waypoint, this functionality has not changed and does NOT actively loiter.

Crash Check

Rover can now detect a crash in most circumstances - thanks Pierre Kancir. It is enabled by default and will change the vehicle into HOLD mode if a crash is detected. The FS_CRASH_CHECK parameter controls what action is taken on crash detection and supports 0:Disabled, 1:HOLD, 2:HoldAndDisarm.

Pixhawk2 heated IMU support

This release adds support for the IMU heater in the Pixhawk2, allowing for more stable IMU temperatures. The Pixhawk2 is automatically detected and the heater enabled at boot, with the target IMU temperature controllable via BRD_IMU_TARGTEMP. Using an IMU heater should improve IMU stability in environments with significant temperature changes.

PH2SLIM Support

This release adds support for the PH2SLIM variant of the Pixhawk2, which is a Pixhawk2 cube without the isolated sensor top board. This makes for a very compact autopilot for small aircraft. To enable PH2SLIM support set the BRD_TYPE parameter to 6 using a GCS connected on USB.

AP_Module Support

This is the first release of ArduPilot with loadable module support for Linux based boards. The AP_Module system allows for externally compiled modules to access sensor data from ArduPilot controlled sensors. The initial AP_Module support is aimed at vendors integrating high-rate digital image stabilisation using IMU data, but it is expected this will be expanded to other use cases in future releases.

Major VRBrain Support Update

This release includes a major merge of support for the VRBrain family of autopilots. Many thanks to the great work by Luke Mike in putting together this merge!

Much Faster Boot Time

Boot times on Pixhawk are now much faster due to a restructuring of the driver startup code, with slow starting drivers not started unless they are enabled with the appropriate parameters. The restructuring also allows for support of a wide variety of board types, including the PH2SLIM above.

This release includes many other updates right across the flight stack, including several new features. Some of the changes include:

  • log all rally points on startup
  • the armed state is now logged
  • support added for MAV_CMD_ACCELCAL_VEHICLE_POS
  • support MAVLink based external GPS device
  • support LED_CONTROL MAVLink message
  • support PLAY_TUNE MAVLink message
  • added AP_Button support for remote button input reporting
  • support 16 channel SERVO_OUTPUT_RAW in MAVLink2
  • added MAVLink reporting of logging subsystem health
  • added BRD_SAFETY_MASK to allow for channel movement for selected channels with safety on
  • lots of HAL_Linux improvements to bus and thread handling
  • added IMU heater support on Pixhawk2
  • allow for faster accel bias learning in EKF2
  • added AP_Module support for loadable modules
  • merged support for wide range of VRBrain boards
  • added support for PH2SLIM and PHMINI boards with BRD_TYPE
  • greatly reduced boot time on Pixhawk and similar boards
  • fixed magic check for signing key in MAVLink2
  • fixed averaging of gyros for EKF2 gyro bias estimate
  • added support for ParametersG2
  • support added for the GPS_INPUT mavlink message
Read more…
3D Robotics

90,000 lines of code in one month! From the PX4 blog:

We have upgraded today to NuttX 7.18 on PX4/master. This is a major upgrade from NuttX 6, which we had been using for several years almost unchanged. PX4 grew exponentially in 2016, adding significant resources to our ability to drive software development and flight testing. This allowed us to take on the monumental task of re-validating the operating system. NuttX has evolved significantly in the last few years, offering fixes, new functionality and support for new hardware like the STM32F7.

We always saw the technical strengths of NuttX, and it's interesting to note that a major company like Sony has adopted it for products and talks about it at ARM TechCon. Every Pixhawk flies the PX4 middleware and NuttX, so it might still be amongst the most widely adopted products running it.

The team has been very active over the past month, leading to a record contribution level. Note, though, that this is a very incomplete way to look at PX4: you also need to factor in PX4/ecl, PX4/matrix, PX4/DriverFramework, etc., to get the full picture. This one repository is just the "glue" that brings it all together.

Cortex M7 vs. Parrot Bebop 2.

It is interesting to note that the CPU load running flight control on the STM32F7 on a single core is comparable to the CPU load  on a much faster single core of a Parrot Bebop 2:

Below is the load on Bebop (percentage in the list is a single core)

  • Mem: 66516K used, 230236K free, 0K shrd, 15268K buff, 22992K cached
  • CPU: 10.3% usr 21.7% sys 0.0% nic 67.4% idle 0.0% io 0.0% irq 0.5% sirq
  • Load average: 1.53 0.80 0.32 3/113 1221
  • PID PPID USER STAT VSZ %VSZ CPU %CPU COMMAND
  • 1200 1187 root S 36468 12.2 0 26.1 px4 /home/root/ /home/root/px4.config

And here is the load on a Cortex M7 (STM32F7, complete system load)

  • Processes: 21 total, 4 running, 17 sleeping
  • CPU usage: 31.33% tasks, 1.39% sched, 67.28% idle
  • DMA Memory: 5120 total, 1536 used 1536 peak
  • Uptime: 91.005s total, 61.744s idle

The similar numbers despite the clock-speed advantage of the Parrot Bebop 2 make sense: flight control consists of many short interleaved tasks and sensor driver accesses, for which Linux is not specifically designed, while the NuttX RTOS is. It also sheds light on why prosumer drone manufacturers like DJI and Yuneec, as well as Intel, have retained an independent microcontroller for flight control in their models. Qualcomm is offering a smart hybrid approach by running PX4 on the DSP of the Snapdragon SoC, leaving Linux to run computer vision.

Read more…
3D Robotics

Love these updates from the crack Schmidt team:

“It took half an hour to drill those three holes” sighs Scott Brown, Physicist and Electronic Engineer, looking at the tiny penetrations on top of the UAV. Fiber optic cables now poke out, ready to measure sunlight and send the information to the payload (scientific package) inside the aircraft to be recorded.

It has been almost three years since the process to design the instruments began. It was one process making sensors, software, cameras and GPS work, and another entirely to make them fit into the limited space provided by the aircraft. Brown’s team had to customize every component, and today the effort is paying off. “The instruments behaved how they were supposed to,” says Chris Zappa, Oceanographer and head of Brown’s engineering team. “We have gathered some very interesting data in high resolution, and now we can shift the focus back to science.”

Getting the instruments to fit and work inside the aircraft was not the only aspect that required great amounts of time and patience. Organizing an interdisciplinary research cruise such as this one was no easy feat, but its purpose is nearly complete and tomorrow will bring the final station.

Scott Brown, Physicist and Engineer, incorporates a hemispheric radiometer inside an Unmanned Aerial Vehicle. It took almost three years for these instruments to be customized so they could fit and function properly inside the aircraft. SOI / Mónika Naranjo González

Practical and Holistic

Each team will now bring the data and samples collected back to their home labs, then begin the long job of processing them. They will then crosscheck their findings with those of the other working groups, and that is when the true value of this collaborative effort will become clear.

Ground truth measurements for ocean color satellites are one interesting application. While the UAV flies over the ocean, away from the effects of ships or clouds, cameras record the different colors of the water in high wavelength resolution. Regular cameras record images in three wavelengths: red, green and blue. Zappa's instruments are capable of continuously recording 322 different wavelengths.

Hyperspectral cameras scan the ocean's surface and record 322 color wavelengths that can be linked to different biochemical processes in seawater. SOI / Chris Zappa
Scott Brown, Physicist and Engineer, stands next to one of the HQ-60B drones used to scan the sea surface. The drone had just completed another successful mission and was being disarmed on Falkor's flight deck. SOI / Mónika Naranjo González

 

By correlating the colors picked up by the cameras with the chemical and biological composition of the samples processed by other working groups, this team will be able to link oceanic processes with the color spectrum of seawater. This information will be used to calibrate satellite observations of ocean color and link them to the biochemical processes determined by the other scientific working groups onboard.

Looking Forward

Today, Brown's instruments have improved enormously since the first time he tested them. They can take part in longer flights and be deployed more often, the data can be downloaded faster, and the instruments inside the UAV can be swapped out quickly. They also require less power from the aircraft, and are thoroughly balanced and weighted so they don't affect the flight characteristics in any way. He is confident the payload will gather trustworthy data. His mission has been a success, as has been the mission of every scientist onboard.

Now the next stage of the process begins: understanding the stories that every sample can tell, in order to integrate them in the computer models and other forms of analysis that will lead to a better understanding of biochemical processes of the oceans.

An HQ-60B drone flies over the Pacific Ocean, away from the effects that Falkor or clouds could have on the data being collected. The aircraft flies an average of two hours in each mission, carrying a payload of radiation, hyperspectral or infrared cameras. SOI / Josef Wischler

Read more…
3D Robotics

New store for ROS-compatible hardware

From ROS.org:

I'd like to announce a new online store for robots, sensors and components supported by ROS: https://www.roscomponents.com

Why ROS-Components?

In recent years, ROS has become the standard in service and research robotics, and it is making great advances in industry.

Most of the robots and components on the market support ROS, though finding out which are really supported, what ROS version they use, and how to get them is sometimes a difficult task. One of our main purposes is to make this easier and simpler for the customer, by linking the products with their ROS controllers, explaining how to install and configure them, and showing where to find useful information about them.

All the products on the site are supported by ROS, either available directly in the ROS distribution or through their source code. The ROS community has a new meeting point in ROS Components!

ROS as standard

At ROS Components we strongly believe that ROS is, and will remain, the standard in robotics for many years to come. Therefore we want to encourage roboticists to use it (if you are not already doing so), as well as manufacturers to support it.

Supporting ROS and its Community

As you know, the ROS core is currently maintained by the Open Source Robotics Foundation (OSRF), an independent non-profit R&D company that leads the development and maintenance of ROS releases and hosts all the necessary infrastructure.

At ROS Components we try to encourage the use of ROS as well as its maintenance and growth. Therefore we are going to donate part of the proceeds of every sale to the OSRF. So, every time you buy from ROS Components, you'll be contributing to ROS maintenance and development.

Read more…
3D Robotics

DIY Drones at 82,000 members


It's customary and traditional that we celebrate the addition of every 1,000 new members here and share the traffic stats. We've now passed 82,000 members!

Thanks as always to all the community members who make this growth possible, and especially to the administrators and moderators who approve new members, blog posts and otherwise respond to questions and keep the website running smoothly.

Read more…
3D Robotics

Measuring airspeed without an airspeed sensor


Interesting post from the FlightGear blog via Curtis Olson:

Motivation

Air France #447

On June 1, 2009 Air France flight #447 disappeared over the Atlantic Ocean.  The subsequent investigation concluded “that the aircraft crashed after temporary inconsistencies between the airspeed measurements – likely due to the aircraft’s pitot tubes being obstructed by ice crystals – caused the autopilot to disconnect, after which the crew reacted incorrectly and ultimately caused the aircraft to enter an aerodynamic stall from which it did not recover.”  https://en.wikipedia.org/wiki/Air_France_Flight_447

This incident along with a wide variety of in-flight pitot tube problems across the aviation world have led the industry to be interested in so called “synthetic airspeed” sensors.  In other words, is it possible to estimate the aircraft’s airspeed by using a combination of other sensors and techniques when we are unable to directly measure airspeed with a pitot tube?

Basic Aerodynamics

Next, I need to say a quick word about basic aerodynamics with much hand waving.  If we fix the elevator position of an aircraft, fix the throttle position, and hold a level (or fixed) bank angle, the aircraft will typically respond with a slowly damped phugoid and eventually settle out at some ‘trimmed’ airspeed.

(Figure: phugoid oscillation)

If the current airspeed is faster than the trimmed airspeed, the aircraft will have a positive pitch up rate which will lead to a reduction in airspeed.  If the airspeed is slower than the trimmed airspeed, the aircraft will have a negative pitch rate which will lead to an acceleration.  https://en.wikipedia.org/wiki/Phugoid

The important point is that these variables are somehow all interrelated.  If you hold everything else fixed, there is a distinct relationship between airspeed and pitch rate, but this relationship is highly dependent on the current position of elevator, possibly throttle, and bank angle.

Measurement Variables and Sensors

In a small UAS under normal operating conditions, we can measure a variety of variables with fairly good accuracy.  The variables that I wish to consider for this synthetic airspeed experiment are: bank angle, throttle position, elevator position, pitch rate, and indicated airspeed.

We can conduct a flight and record a time history of all these variables.  We presume that they have some fixed relationship based on the physics and flight qualities of the specific aircraft in its current configuration.

It would be possible to imagine some well-crafted physics-based equation that expressed the true relationship between these variables ... but this is a quick afternoon hack and that would require too much time and too much thinking!

Radial Basis Functions

Enter radial basis functions.  You can read all about them here:  https://en.wikipedia.org/wiki/Radial_basis_function

From a practical perspective, I don’t really need to understand how radial basis functions work.  I can simply write a python script that imports the scipy.interpolate.Rbf module and just use it like a black box.  After that, I might be tempted to write a blog post, reference radial basis functions, link to wikipedia, and try to sound really smart!

Training the Interpolator

Step one is to dump the time history of these 5 selected variables into the Rbf module so it can do its magic.  There is a slight catch, however.  Internally the Rbf module creates an x by x matrix where x is the number of samples you provide.  With just a few minutes of data you can quickly blow up all the memory on your PC.  As a workaround I split the entire range of each variable into n bins.  In this case I have 4 independent variables (bank angle, throttle position, elevator position, and pitch rate), which leads to an n x n x n x n matrix.  For dimensions in the range of 10-25 this is quite manageable.

Each element of the 4-dimensional matrix becomes a bin that holds the average airspeed of all the measurements that fall within that bin.  This matrix is sparse, so I can extract just the non-zero bins (where we have measurement data) and pass those to the Rbf module.  This accomplishes two nice results: (1) it reduces the memory requirements to something manageable, and (2) it averages out the individual noisy airspeed measurements.
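
A stripped-down sketch of that binning-plus-Rbf idea (not the author's actual script, which is linked below; array names and the bin count are illustrative):

    # Toy sketch of the binning + radial basis function approach (not the author's code).
    # bank, throttle, elevator, pitch_rate, airspeed are 1-D arrays from a flight log.
    import numpy as np
    from scipy.interpolate import Rbf

    def train_synth_airspeed(bank, throttle, elevator, pitch_rate, airspeed, n=15):
        data = np.column_stack([bank, throttle, elevator, pitch_rate])
        lo, hi = data.min(axis=0), data.max(axis=0)
        scaled = (data - lo) / (hi - lo)                         # 0..1 per variable
        idx = np.minimum((scaled * n).astype(int), n - 1)        # bin index per sample

        sums = np.zeros((n, n, n, n))
        counts = np.zeros((n, n, n, n))
        for (i, j, k, l), v in zip(idx, airspeed):
            sums[i, j, k, l] += v
            counts[i, j, k, l] += 1

        filled = counts > 0                                      # only bins that saw data
        centers = (np.argwhere(filled) + 0.5) / n * (hi - lo) + lo
        means = sums[filled] / counts[filled]                    # averaged airspeed per bin
        return Rbf(centers[:, 0], centers[:, 1], centers[:, 2], centers[:, 3], means)

    # In flight, feed the four measurable variables back in to estimate airspeed:
    # rbf = train_synth_airspeed(...); asi_est = rbf(bank_now, thr_now, elev_now, q_now)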

Testing the Interpolator

Now comes the big moment!  In flight we can still sense bank angle, throttle position, elevator position, and pitch rate.  Can we feed these into the Rbf interpolator and get back an accurate estimate of the airspeed?

Here is an example of one flight that shows this technique actually can produce some sensible results.  Would this be close enough (with some smoothing) to safely navigate an aircraft through the sky in the event of a pitot tube failure?  Could this be used to detect pitot tube failures?  Would this allow the pitot tube to be completely removed (after the interpolater is trained of course)?

Synthetic airspeed estimate versus measured airspeed: zoom in on one portion of the flight, and zoom in on another portion of the flight.

Source Code

The source code for this experimental afternoon hack can be found here (along with quite a bit of companion code to estimate aircraft attitude and winds via a variety of algorithms.)

https://github.com/AuraUAS/navigation/blob/master/scripts/synth_asi.py

Conclusions

These are the results of a quick afternoon experiment.  Hopefully I have shown that creating a useful synthetic airspeed sensor is possible.  There are many other (probably better) ways a synthetic airspeed sensor could be derived and implemented.  Are there other important flight variables that should be considered?  How would you create an equation that models the physical relationship between these sensor variables?  What are your thoughts?

Read more…
3D Robotics

WebODM, an easy workflow app for Open Drone Map


From Hackaday:

WebODM allows you to create georeferenced maps, point clouds and textured 3D models from your drone footage. The software is really an integration and workflow manager for Open Drone Map, which does most of the heavy lifting.

Getting started with WebODM or Open Drone Map is simple since they provide preconfigured Docker images. You don’t have to worry about assembling a bunch of dependencies to make everything work. There are other mapping applications in use, too. You can see a comparison of five popular choices in the video below. WebODM isn’t complete yet, but they intend to include mission planning and integration with mobile apps.

Read more…
3D Robotics

New RTF VTOL plane from Horizon

Out now for $250. Anybody have any experience with whether this can be converted to autonomous flight with a Pixhawk or similar?

Overview

VTOL (Vertical Takeoff and Landing) RC models are usually a mixed bag when it comes to performance. If they are stable, their speed and agility are often lackluster. If they are nimble, pilots have to work harder to transition between hovering and airplane flight. The Convergence™ VTOL park flyer changes all that. Its unique design and exclusive flight control software give you the best of both agility and stability while making the transition between multirotor and airplane flight so smooth and predictable, you will feel right at home your first time on the sticks.

Sleek and Simple

Unlike complex VTOL aircraft that rotate the entire wing and require as many as four motors to achieve vertical and forward flight, the Convergence™ park flyer uses a simple, yet sleek, delta-wing design with three brushless motors - two rotating motors on the wing and a fixed-position motor in the tail.

In multirotor flight the wing-mounted motors rotate up into the vertical position to provide lift and flight control along with the motor in the tail. In airplane flight, the wing-mounted motors rotate down into the horizontal position and the model's elevons take over pitch and bank control. Yaw control in airplane flight is provided by differential thrust from the wing-mounted motors.

Exclusive Flight Control Software Makes it Easy

At the heart of it all is flight control software that has been expertly tuned by designer Mike McConville, so almost any RC pilot can experience the fun of VTOL flight.

Automated Transition

Making the transition between multirotor and airplane flight is as simple as flipping a switch. The flight controller will smoothly rotate the two wing-mounted motors into the correct flight attitude and activate the rear motor as needed.

Stability and Acro Flight Modes

These two flight modes give you a wide range of performance for every phase of flight.

  • Stability Mode
    • In multirotor flight, Stability Mode will limit pitch and bank angles and work to keep the model level when you release the sticks. This allows you to take off and land like a pro, even if you've never flown a multirotor before. In airplane flight, it will limit pitch and bank angles and automatically return the wings to level when the sticks are released.
    • Stability Mode automatically engages during the transition between multirotor and airplane flight. It seamlessly maintains self-leveling and angle limits from one phase of flight to the other, making this the easiest RC VTOL experience you will find anywhere.
  • Acro Mode
    • In Acro Mode there are no angle limits or self-leveling in any phase of flight. During multirotor flight, the model will behave like a conventional multirotor that pitches and banks in whatever direction you want it to fly. It can flip and roll like other multirotors, too.
    • In airplane flight, Acro Mode lets you perform a wide range of aerobatic maneuvers. And because you have the forward thrust of two brushless motors working for you, there is plenty of speed and power to spare. You can even use the differential thrust of the motors to perform unique spinning and tumbling maneuvers.
Read more…
3D Robotics

From The Verge:

Audi, like pretty much every other automaker under the sun, continues to introduce more and more automated driving features for its cars. The next step, says the company, will be machine learning-powered driver assist features that work in constrained situations like driving in traffic and parking. They’ll be available in the new generation of Audi A8 saloons in 2017, but for the moment the German company is testing these features in a much more compact model — specifically, a model car 1/8th the size of the real thing.

You can see this pint-sized vehicle in action in the video above. Using deep reinforcement learning (a type of machine learning that’s essentially trial and error with "rewards" for the computer when it gets something right), Audi’s technicians have enabled it to autonomously search and park in a 3 x 3 meter arena. "An algorithm autonomously identifies the successful actions, thus continually refining the parking strategy," writes Audi in a blog post. "So in the end the system is able to solve even difficult problems autonomously."

Audi isn’t the only player in the auto industry to use machine learning to develop autonomous driving (startup Drive.ai is doing something similar) but it is one of the biggest, and technology it introduces will affect more drivers. Just like Audi’s miniature training car, pretty soon technology will make us all model drivers.

Read more…
3D Robotics

3D printed planes finally get practical

If Gary is convinced, so am I:

A while ago I thought that small 3D printed drones would not really become a thing. I based this on weight and build size. I could not see how it could be made to work.

I was utterly wrong.

Stepan Dokoupil and Patrik Svida, of 3D Lab Print in the Czech Republic, have created works of art. Scale warbirds and a glider right now, but it can only be a hop and a skip to unmanned aircraft that can be reproduced at low cost as fast as your printer can print.

I cannot begin to imagine how many hours of work have gone into designing these machines.

The thought of being able to email an update of a design or indeed an entirely new design to somebody in the field is delicious. It would create an entirely new sort of relationship with drone designers and customers.

Regulators will also have something else to think about.

I’m convinced locally printed drones have a future. I am late to the party.

Read more…
3D Robotics

From MIT:

A new system from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) is the first to allow users to design, simulate, and build their own custom drone. Users can change the size, shape, and structure of their drone based on the specific needs they have for payload, cost, flight time, battery usage, and other factors.

To demonstrate, researchers created a range of unusual-looking drones, including a five-rotor “pentacopter” and a rabbit-shaped “bunnycopter” with propellers of different sizes and rotors of different heights.

“This system opens up new possibilities for how drones look and function,” says MIT Professor Wojciech Matusik, who oversaw the project in CSAIL’s Computational Fabrication Group. “It’s no longer a one-size-fits-all approach for people who want to make and use drones for particular purposes.”

The interface lets users design drones with different propellers, rotors, and rods. It also provides guarantees that the drones it fabricates can take off, hover and land — which is no simple task considering the intricate technical trade-offs associated with drone weight, shape, and control.

“For example, adding more rotors generally lets you carry more weight, but you also need to think about how to balance the drone to make sure it doesn’t tip,” says PhD student Tao Du, who was first author on a related paper about the system. “Irregularly-shaped drones are very difficult to stabilize, which means that they require establishing very complex control parameters.”

Du and Matusik co-authored a paper with PhD student Adriana Schulz, postdoc Bo Zhu, and Assistant Professor Bernd Bickel of IST Austria. It will be presented next week at the annual SIGGRAPH Asia conference in Macao, China.

Today’s commercial drones only come in a small range of options, typically with an even number of rotors and upward-facing propellers. But there are many emerging use cases for other kinds of drones. For example, having an odd number of rotors might create a clearer view for a drone’s camera, or allow the drone to carry objects with unusual shapes.

Designing these less conventional drones, however, often requires expertise in multiple disciplines, including control systems, fabrication, and electronics.

“Developing multicopters like these that are actually flyable involves a lot of trial-and-error, tweaking the balance between all the propellers and rotors,” says Du. “It would be more or less impossible for an amateur user, especially one without any computer-science background.”

But the CSAIL group’s new system makes the process much easier. Users design drones by choosing from a database of parts and specifying their needs for things like payload, cost, and battery usage. The system computes the sizes of design elements like rod lengths and motor angles, and looks at metrics such as torque and thrust to determine whether the design will actually work. It also uses an “LQR controller” that takes information about a drone’s characteristics and surroundings to optimize its flight plan.

One of the project’s core challenges stemmed from the fact that a drone’s shape and structure (its “geometry”) is usually strongly tied to how it has been programmed to move (its “control”). To overcome this, researchers used what’s called an “alternating direction method,” which means that they reduced the number of variables by fixing some of them and optimizing the rest. This allowed the team to decouple the variables of geometry and control in a way that optimizes the drone’s performance.

“Once you decouple these variables, you turn a very complicated optimization problem into two easy sub-problems that we already have techniques for solving,” says Du. He envisions future versions of the system that could proactively give design suggestions, like recommending where a rotor should go to accommodate a desired payload.
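
The CSAIL code isn't shown here, but the alternating idea itself is easy to illustrate: hold one block of variables fixed, optimize the other, and switch. A toy coordinate-descent sketch with a made-up coupled cost (purely illustrative, not the paper's solver):

    # Toy illustration of alternating optimization over two blocks of variables.
    # In the drone system one block would be "geometry" and the other "control";
    # the cost function here is made up.
    import numpy as np
    from scipy.optimize import minimize

    def cost(g, c):
        return (g[0] - 2) ** 2 + (c[0] + 1) ** 2 + 0.1 * (g[0] * c[0]) ** 2

    g, c = np.array([0.0]), np.array([0.0])
    for _ in range(20):
        g = minimize(lambda x: cost(x, c), g).x   # optimize block 1 with block 2 fixed
        c = minimize(lambda x: cost(g, x), c).x   # optimize block 2 with block 1 fixed
    print(g, c)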

“This is the first system in which users can interactively design a drone that incorporates both geometry and control,” says Nobuyuki Umetani, a research scientist at Autodesk, Inc., who was not involved in the paper. “This is very exciting work that has the potential to change the way people design.”

The project was supported, in part, by the National Science Foundation, the Air Force Research Laboratory and the European Union’s Horizon 2020 research and innovation program.

Read more…
3D Robotics
Just announced: the next-generation OpenMV cam with the new M7 processor for twice the power and an even lower price ($55). The original OpenMV is my favorite computer vision processor, and this new one looks even better. Preorders open now.

The OpenMV Cam M7 board is our next-generation OpenMV Cam. It features 1.5-2X the processing power, 2X the RAM, and 2X the flash of the previous OpenMV Cam. In particular, the increased amount of RAM means we have space to JPEG-compress images at a much higher quality. Additionally, the MicroPython heap gets 64KB more space, so you can actually create large image copies now. The M7 is a superscalar processor capable of executing 1-2 instructions per clock, so algorithm speed-ups will vary. But it's by default faster at 216 MHz than the M4 core at 168 MHz.

In addition to the better processor, the USB connector now has through-hole stress-relief pads so you can't rip it off. We also added another I/O pin and exposed the OV7725's frame sync pin so you can sync up two cams' video streams. To make room for more I/O pins we moved all the debug pins to a special debug header. The debug header is mainly for our internal use to test and program OpenMV Cams, but you can hook up an ARM debugger to it too. Other than that, we switched out the IR LEDs for surface-mount ones. This change, along with a few other fixes, will allow us to offer the OpenMV Cam at $65 retail.

But if you pre-order today you can get the new OpenMV Cam for $55. We'll be taking pre-orders for the next 2 months. We need to add $15K to our coffers to afford production of the new board in a 1K+ build run, which means we need about 300+ pre-orders. If you want to see the OpenMV Cam project move forward, please pre-order the new OpenMV Cam M7.

Now, if we can get a lot more sales than just 300 pre-orders, this will allow us to lower the prices for shields too, which are built in 100-unit quantities. Please go on a shopping storm and let everyone else know about it too! Feel free to add anything else in our store to your cart and we'll ship it all together.

Note that the final board will be black; the images are from our prototype. Assuming no obstacles, we'll start manufacturing OpenMV Cam M7s at the start of February, they should start shipping in March, and we'll continue shipping thereafter.


The OpenMV Cam is a small, low-power microcontroller board which allows you to easily implement applications using machine vision in the real world. You program the OpenMV Cam in high-level Python scripts (courtesy of the MicroPython operating system) instead of C/C++. This makes it easier to deal with the complex outputs of machine vision algorithms and to work with high-level data structures. But you still have total control over your OpenMV Cam and its I/O pins in Python. You can easily trigger taking pictures and video on external events, or execute machine vision algorithms to figure out how to control your I/O pins.
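
For a flavor of what those scripts look like, here is a minimal color-tracking sketch along the lines of the standard OpenMV examples; the LAB threshold values are placeholders and the exact API should be checked against the OpenMV docs for your firmware version.

    # Minimal OpenMV-style MicroPython sketch: grab frames and track a color blob.
    # The threshold tuple is a placeholder LAB range - tune it in the OpenMV IDE.
    import sensor, time

    sensor.reset()
    sensor.set_pixformat(sensor.RGB565)
    sensor.set_framesize(sensor.QVGA)      # 320x240
    sensor.skip_frames(time=2000)          # let the sensor settle
    clock = time.clock()

    while True:
        clock.tick()
        img = sensor.snapshot()
        for blob in img.find_blobs([(30, 100, 15, 127, 15, 127)], pixels_threshold=100):
            img.draw_rectangle(blob.rect())         # mark the tracked blob
            img.draw_cross(blob.cx(), blob.cy())
        print(clock.fps())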

The OpenMV Cam features:

  • The STM32F765VI ARM Cortex M7 processor running at 216 MHz with 512KB of RAM and 2 MB of flash. All I/O pins output 3.3V and are 5V tolerant. The processor has the following I/O interfaces:
    • A full speed USB (12Mbs) interface to your computer. Your OpenMV Cam will appear as a Virtual COM Port and a USB Flash Drive when plugged in.
    • A μSD Card socket capable of 100Mbs reads/writes which allows your OpenMV Cam to record video and easily pull machine vision assets off of the μSD card.
    • A SPI bus that can run up to 54Mbs allowing you to easily stream image data off the system to either the LCD Shield, the WiFi Shield, or another microcontroller.
    • An I2C Bus, CAN Bus, and an Asynchronous Serial Bus (TX/RX) for interfacing with other microcontrollers and sensors.
    • A 12-bit ADC and a 12-bit DAC.
    • Three I/O pins for servo control.
    • Interrupts and PWM on all I/O pins (there are 10 I/O pins on the board).
    • And, an RGB LED and two high power 850nm IR LEDs.
  • The OV7725 image sensor is capable of taking 640x480 8-bit Grayscale images or 320x240 16-bit RGB565 images at 30 FPS. Your OpenMV Cam comes with a 2.8mm lens on a standard M12 lens mount. If you want to use more specialized lenses with your OpenMV Cam you can easily buy and attach them yourself.

For more information about the OpenMV Cam please see our documentation.

Applications

The OpenMV Cam can be used for the following things currently (more in the future):

  • Frame Differencing
    • You can use Frame Differencing on your OpenMV Cam to detect motion in a scene by looking at what's changed. Frame Differencing allows you to use your OpenMV Cam for security applications.
  • Color Tracking
    • You can use your OpenMV Cam to detect up to 32 colors at a time in an image (realistically you'd never want to find more than 4) and each color can have any number of distinct blobs. Your OpenMV Cam will then tell you the position, size, centroid, and orientation of each blob. Using color tracking your OpenMV Cam can be programmed to do things like tracking the sun, line following, target tracking, and much, much, more.
  • Marker Tracking
    • You can use your OpenMV Cam to detect groups of colors instead of independent colors. This allows you to create color markers (2 or more color tags) which can be put on objects, allowing your OpenMV Cam to understand what the tagged objects are.
  • Face Detection
    • You can detect Faces with your OpenMV Cam (or any generic object). Your OpenMV Cam can process Haar Cascades to do generic object detection and comes with a built-in Frontal Face Cascade and Eye Haar Cascade to detect faces and eyes.
  • Eye Tracking
    • You can use Eye Tracking with your OpenMV Cam to detect someone's gaze. You can then, for example, use that to control a robot. Eye Tracking detects where the pupil is looking versus detecting if there's an eye in the image.
  • Optical Flow
    • You can use Optical Flow to detect translation of what your OpenMV Cam is looking at. For example, you can use Optical Flow on a quad-copter to determine how stable it is in the air.
  • Edge/Line Detection
    • You can perform edge detection via either the Canny edge detector algorithm or simple high-pass filtering followed by thresholding. Once you have a binary image, you can then use the Hough detector to find all the lines in the image. With edge/line detection you can use your OpenMV Cam to easily detect the orientation of objects.
  • Template Matching
    • You can use template matching with your OpenMV Cam to detect when a translated pre-saved image is in view. For example, template matching can be used to find fiducials on a PCB or read known digits on a display.
  • Image Capture
    • You can use the OpenMV Cam to capture up to 320x240 RGB565 (or 640x480 Grayscale) BMP/JPG/PPM/PGM images. You directly control how images are captured in your Python script. Best of all, you can perform machine vision functions and/or draw on frames before saving them.
  • Video Recording
    • You can use the OpenMV Cam to record up to 320x240 RGB565 (or 640x480 Grayscale) MJPEG video or GIF images. You directly control how each frame of video is recorded in your Python script and have total control over how video recording starts and finishes. And, like capturing images, you can perform machine vision functions and/or draw on video frames before saving them.

Finally, all the above features can be mixed and matched in your own custom application along with I/O pin control to talk to the real world.

Pinout

OpenMV Cam Pinout

Schematic & Datasheets

Dimensions

Camera Dimensions

Specifications

Processor: ARM® 32-bit Cortex®-M7 CPU w/ Double-Precision FPU, 216 MHz (462 DMIPS), CoreMark score 1082 (compare w/ Raspberry Pi Zero: 2060)
RAM Layout: 128KB .DATA/.BSS/Heap/Stack + 384KB Frame Buffer/Stack (512KB total)
Flash Layout: 32KB Bootloader + 96KB Embedded Flash Drive + 1920KB Firmware (2MB total)
Supported Image Formats: Grayscale, RGB565, JPEG
Maximum Supported Resolutions: Grayscale: 640x480 and under; RGB565: 320x240 and under; Grayscale JPEG: 640x480 and under; RGB565 JPEG: 640x480 and under
Lens Info: Focal Length: 2.8mm; Aperture: F2.0; Format: 1/3"; Angle (Field-of-View): 115°; Mount: M12*0.5; IR Cut Filter: 650nm (removable)
Electrical Info: All pins are 5V tolerant with 3.3V output. All pins can sink or source up to 25mA. P6 is not 5V tolerant in ADC or DAC mode. Up to 120mA may be sinked or sourced in total between all pins. VIN may be between 3.6V and 5V. Do not draw more than 250mA from your OpenMV Cam's 3.3V rail.
Weight: 16g
Length: 45mm
Width: 36mm
Height: 30mm

Power Consumption

Idle - No μSD Card: 110mA @ 3.3V
Idle - μSD Card: 110mA @ 3.3V
Active - No μSD Card: 190mA @ 3.3V
Active - μSD Card: 200mA @ 3.3V

Temperature Range

Storage: -40°C to 125°C
Operating: -20°C to 70°C
Read more…
3D Robotics

Great news from Randy and the ArduCopter team:

Over the past month or so a few of us have been experimenting with using the Pozyx system (based on the Decawave DWM1000 chip) for non-GPS navigation.

We've made quite good progress and it's just been integrated into ArduPilot master and should go out with Copter-3.5 in a few months. You can see a video of a flight using Pozyx here:

The system works by setting up 4 anchors (aka "beacons") in a rectangle (see image below). Then a "tag" is mounted to the vehicle (see the IRIS above). This tag is a Pozyx sensor plus an Arduino Uno running a fairly simple sketch which tells ArduPilot the lengths of the sides of the rectangle and the vehicle's distance to each beacon. ArduPilot's EKF has been enhanced to consume this information and come up with a position estimate.
(Diagram: the four anchor beacons arranged in a rectangle around the flying area.)
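
To get a feel for what the EKF is being asked to do with those ranges, here is a toy least-squares multilateration sketch; it is illustrative only (the anchor coordinates and ranges are made up, and the real fusion happens inside ArduPilot's EKF).

    # Toy sketch: estimate a 2-D position from ranges to four anchors at known positions.
    # Anchor layout and measured ranges are made up; ArduPilot's EKF does this properly.
    import numpy as np
    from scipy.optimize import least_squares

    anchors = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 8.0], [0.0, 8.0]])  # rectangle corners (m)
    ranges = np.array([6.4, 7.2, 5.9, 5.1])                                 # measured distances (m)

    def residuals(p):
        return np.linalg.norm(anchors - p, axis=1) - ranges

    sol = least_squares(residuals, x0=np.array([5.0, 4.0]))  # start near the middle
    print("estimated position:", sol.x)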

Some extra info:

  • the range of the sensors seems to be a bit longer than 50m.
  • there's some indication that the system suffers from multipathing so it really needs a clear line-of-sight from the vehicle to the beacons.
  • so far we use 4 beacons. The minimum is 3, the EKF supports up to 10.
  • the hardware is quite bulky but we hope smaller hardware will be manufactured by someone eventually. There seems to be interest from several groups to do this.

For developers who want to experiment with the system, we've created a fairly detailed setup wiki page here.

Some credit for the work so far:

  • idea to use the DWM1000 was brought to the team by long-time ArduPilot developer Jonathan Challinger
  • strategy to use Pozyx and first GPS_INPUT implementation by the Drone Japan School members including Kitaoka-san, Matsuura-san and Murata-san.

As per usual, if you're interested in getting involved with ArduPilot development, you can find many of us on Gitter.

Read more…
3D Robotics

Visualizing autopilot behavior with Flight Gear

Great post from Curtis Olson, the lead developer of FlightGear and a long-time autopilot developer, on using simulators to do sophisticated autopilot analysis.

Blending real video with synthetic data yields a powerful (and cool!) way to visualize your Kalman filter (attitude estimate) as well as your autopilot flight controller.


Conformal HUD Elements

Conformal definition: of, relating to, or noting a map or transformation in which angles and scale are preserved.  For a HUD, this means the synthetic element is drawn in a way that visually aligns with the real world.  For example: the horizon line is conformal if it aligns with the real horizon line in the video.
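
Before listing the elements, here is a toy sketch of what drawing one conformal element involves: computing where the horizon should sit for an ideal forward-looking pinhole camera given roll and pitch. Signs and conventions are assumptions, and a real HUD also has to account for camera mounting offsets and lens distortion.

    # Toy sketch: conformal horizon line for an ideal, forward-looking pinhole camera.
    # Sign conventions, mounting offsets and lens distortion are all glossed over.
    import numpy as np

    def horizon_endpoints(roll, pitch, f_px, cx, cy, half_len=1000.0):
        # pitching the nose up pushes the horizon down the image by roughly f*tan(pitch)
        off = f_px * np.tan(pitch)
        u = np.array([np.cos(roll), np.sin(roll)])    # direction along the horizon (pixels)
        n = np.array([-np.sin(roll), np.cos(roll)])   # direction perpendicular to it
        center = np.array([cx, cy]) + off * n
        return center - half_len * u, center + half_len * u

    # e.g. 5 deg right roll, 3 deg nose up, 900 px focal length, 1280x720 video:
    p0, p1 = horizon_endpoints(np.radians(5), np.radians(3), 900.0, 640.0, 360.0)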

  • Horizon line annotated with compass points.
  • Pitch ladder.
  • Location of nearby airports.
  • Location of sun, moon, and own shadow.
  • If alpha/beta data is available, a flight path marker is drawn.
  • Aircraft nose (i.e. exactly where the aircraft is pointing towards.)

Nonconformal HUD Elements

  • Speed tape.
  • Altitude tape.
  • Pilot or autopilot ‘stick’ commands.

Autopilot HUD Elements

  • Flight director vbars (magenta).  These show the target roll and pitch angles commanded by the autopilot.
  • Bird (yellow).  This shows the actual roll and pitch of the aircraft.  The autopilot attempts to keep the bird aligned with the flight director using aileron and elevator commands.
  • Target ground course bug (shown on the horizon line) and actual ground course.
  • Target airspeed (drawn on the speed tape.)
  • Target altitude (drawn on the altitude tape.)
  • Flight time (for referencing the flight data.)

Case Study #1: EKF Visualization (above)

What to watch for:

  • Notice the jumpiness of the yellow “v” on the horizon line.  This “v” shows the current estimated ground track, but the jumpiness points to an EKF tuning parameter issue that has since been resolved.
  • Notice a full autonomous wheeled take off at the beginning of the video.
  • Notice some jumpiness in the HUD horizon and attitude and heading of the aircraft.  This again relates back to an EKF tuning issue.

I may never have noticed the EKF tuning problems had it not been for this visualization tool.

Case Study #2: Spin Testing

What to watch for:

  • Notice the flight path marker that shows actual alpha/beta as recorded by actual alpha/beta airdata vanes.
  • Notice how the conformal alignment of the hud diverges from the real horizon especially during aggressive turns and spins.  The EKF fits the aircraft attitude estimate through gps position and velocity and aggressive maneuvers lead to gps errors (satellites go in and out of visibility, etc.)
  • Notice that no autopilot symbology is drawn because the entire flight is flown manually.

Case Study #3: Skywalker Autopilot

What to watch for:

  • Notice the yellow “v” on the horizon is still very jumpy.  This is the horizontal velocity vector direction which is noisy due to EKF tuning issues that were not identified and resolved when this video was created.  In fact it was this flight where the issue was first noticed.
  • Notice the magenta flight director is overly jumpy in response to the horizontal velocity vector being jumpy.  Every jump changes the current heading error which leads to a change in roll command which the autopilot then has to chase.
  • Notice the flight attitude is much smoother than the above Senior Telemaster flight.  This is because the skywalker EKF incorporates magnetometer measurements as well as gps measurements and this helps stabilize the filter even with poor noise/tuning values.
  • You may notice some crazy control overshoot on final approach.  Ignore this!  I was testing an idea and got it horribly wrong.  I’m actually surprised the landing completed successfully, but I’ll take it.
  • Notice in this video the horizon stays attached pretty well.  Much better than in the spin-testing video due to the non-aggressive flight maneuvers, and much better than the telemaster video due to using a more accurate gps: ublox7p versus ublox6.  Going forward I will be moving to the ublox8.

Read more…
3D Robotics

Very cool hack, as reported by Hackaday:

The HTC Vive’s Lighthouse localization system is one of the cleverest things we’ve seen in a while. It uses a synchronization flash followed by a swept beam to tell any device that can see the lights exactly where it is in space. Of course, the device has to understand the signals to figure it out.
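
The arithmetic at the heart of it is tiny: the delay between the sync flash and the moment the swept beam crosses your photodiode is proportional to an angle. A hedged sketch (the rotor period is an assumed constant, and real receivers also have to work out which base station and which sweep axis each pulse belongs to):

    # Sketch: turn a Lighthouse sweep timing into an angle.
    # ROTOR_PERIOD_S is an assumed constant; real code must also classify pulses
    # per base station and per axis (horizontal/vertical sweeps alternate).
    import math

    ROTOR_PERIOD_S = 1.0 / 60.0  # assumed time for one full rotor revolution

    def sweep_angle(t_sync, t_hit):
        """Angle (radians) swept between the sync flash and the beam hitting the diode."""
        return 2.0 * math.pi * (t_hit - t_sync) / ROTOR_PERIOD_S

    # Two base stations each yield two such angles (one per axis); intersecting the
    # resulting rays is what localizes the receiver in 3-D.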

[Alex Shtuchkin] built a very well documented device that can use these signals to localize itself in your room. For now, the Lighthouse stations are still fairly expensive, but the per-device hardware requirements are quite reasonable. [Alex] has the costs down around ten dollars plus the cost of a microcontroller if your project doesn’t already include one. Indeed, his proof-of-concept is basically a breadboard, three photodiodes, op-amps, and some code.

His demo is awesome! Check it out in the video below. He uses it to teach a quadcopter to land itself back on a charging platform, and it’s able to get there with what looks like a few centimeters of play in any direction — more than good enough to land in the 3D-printed plastic landing thingy. That fixture has a rotating drum that swaps out the battery automatically, readying the drone for another flight.

If this is just the tip of the iceberg of upcoming Lighthouse hacks, we can’t wait!

We loved the Lighthouse at first sight, and we’ve been following its progress into a real product. Heck, we’ve even written up a previous DIY Lighthouse receiver built by [Trammell Hudson]. It’s such an elegant solution to the problem of figuring out where your robot is that we get kinda gushy. Beg your pardon.


Read more…
3D Robotics

Geoff Barrows at Centeye has been working for many years on his small, low-cost vision chips and they're really starting to get good. See the above video for visual obstacle avoidance on the Crazyflie nano-drone platform. Sadly you can't buy these boards (Geoff mostly does custom engineering for defense industry clients), but maybe we can convince him to make them a product? 

More details here

Read more…
3D Robotics

Indoor swarming using ultrawideband positioning

The great Bitcraze team in Sweden, who created the Crazyflie nano drones, have now added an ultrawideband indoor positioning system called Loco Positioning (it uses the same Decawave chip as the Pozyx system that I use). They're still bringing it up, but have just hit a milestone: swarming! Over to them:

Last week we reached a milestone for our Loco Positioning System: we got 5 Crazyflie 2.0 to fly in a swarm with Time Difference of Arrival measurements. This is a great step closer to making the LPS leave the early-access state.

Until now, positioning has been done using a method called Two Way Ranging (TWR). The advantage of TWR is that it allows us to easily get ranges to the anchors by actively pinging them in sequence. Based on these ranges we can then calculate the current Crazyflie position and control the Crazyflie to move to a wanted position. The big drawback, though, is that since each Crazyflie has to actively transmit packets to ping anchors, flying many Crazyflies means sharing the air, and so the more we want to fly, the less ranging each Crazyflie can do. In other words: it does not scale.

TDoA measurement consists of measuring the difference in flight time between packets coming from different anchors, which is harder to achieve since the anchor clocks must be synchronized with each other. The killer feature of TDoA is that it can be implemented using unidirectional packets sent from the anchor system and received by the tag/Crazyflie. It means that as soon as you get one Crazyflie flying with TDoA, you can get as many as you want, since the Crazyflies do not have to transmit anything.
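
The geometry behind TDoA can be sketched in a few lines: each measured difference constrains the tag to a hyperbola, and a least-squares solve over several anchors recovers the position. A toy illustration with made-up numbers (not the Crazyflie firmware, which does this inside its onboard estimator):

    # Toy sketch of TDoA positioning: solve for the tag position from differences of
    # distances to a reference anchor. Anchor layout and measurements are made up.
    import numpy as np
    from scipy.optimize import least_squares

    anchors = np.array([[0.0, 0.0, 0.0], [4.0, 0.0, 0.0],
                        [4.0, 4.0, 2.0], [0.0, 4.0, 2.0]])  # anchor positions (m)
    tdoa = np.array([0.8, 1.1, 0.3])  # |p - a_i| - |p - a_0| for i = 1..3, in metres

    def residuals(p):
        d = np.linalg.norm(anchors - p, axis=1)
        return (d[1:] - d[0]) - tdoa

    sol = least_squares(residuals, x0=np.array([2.0, 2.0, 1.0]))
    print("estimated tag position:", sol.x)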

This is what happened last week: on Thursday evening we got 1 Crazyflie to fly with TDoA measurements. On Friday we tried 3 and then 5 without much effort. It was just a matter of modifying the ROS launch file to connect more Crazyflies, a copy-paste operation.

There still seems to be margin for progress toward even more stable flight with TDoA, and we are also working on making the LPS and swarm work with our Python client, which will make it easier to use outside a robotics lab.

If you want to try the (very experimental!) TDoA mode with your loco positioning system we have documented how to get it to work on the wiki.

Thanks a lot to the growing community that is supporting us and allowing us to move faster towards a Crazyflie swarm.

Read more…