Chris Anderson's Posts (2711)

3D Robotics


I'm a big fan of the Marvelmind indoor positioning system, which is inexpensive, accurate (2cm) and quite easy to use. They've now put together a tutorial page on how to use it with drones, both PX4 and Ardupilot:

------

Marvelmind Indoor "GPS" supports PixHawk with ArduPilot and PX4

 

– Marvelmind and ArduPilot – link to ArduPilot.org

 

– Marvelmind and PX4 Integration Manual – step by step guidance with settings and screenshots for Mission Planner, PixHawk and the HW connectivity

 

– PixHawk and Marvelmind Integration Manual – step by step guide for PixHawk, ArduPilot and Marvelmind integration for drones

 

Build an indoor positioning system for quadcopters properly

 

There are quite a few basic aspects that have to be taken into account to fly indoors successfully:

 

– Autonomous copter settings manual – basic and practical recommendations for setting up the Indoor “GPS” system for use with autonomous copters/drones, indoors and outdoors

 

– Placement Manual – practical advice and examples of how to mount the Marvelmind Indoor “GPS” system to achieve the best performance in different applications and configurations

– Check the slides about drones

– Check the slides about Precise Z

 

– Help: Z-coordinates for copters – long explanation – a YouTube explanation of how to place the stationary beacons properly to get good Z accuracy. If you can’t follow this advice because your environment doesn’t allow it, use the Precise Z configuration with 4+2 stationary beacons

 

 

Examples of precise indoor positioning and navigation for drones

 

Precisely (±2cm) tracking a DJI Phantom quadcopter indoors in 3D (XYZ)

 
 

– Precise tracking in X,Y,Z (XY view + XZ view + YZ view)

 

– Raw data and post-processed data from the Dashboard’s Player – note that the same is now available in the Real-Time Player, not just in post-processing

 

– The DJI ecosystem is closed, at least for the Phantom and Mavic series. Thus, it is possible to track the Phantom, but not to fly it autonomously indoors (without deeper hacking)

 

 

 
 

Precisely (±2cm) tracking a DJI Phantom quadcopter outdoors in 3D (XYZ)

 
 

– Precise (±2cm) tracking in XYZ (XY view + XZ view + YZ view) – the same as above, but outdoors

 

– In this demo and in the demo above, the same Precise-Z configuration consisting of 4+2 stationary beacons is used. See more in the Placement Manual

 
 

Fully autonomous flight indoors

 
 

– A small copter flying fully autonomously, relying on Marvelmind Indoor “GPS”

 
 

Indoor tracking of small and micro-drones

 
 

It is possible to track even micro-drones (less than 100g) with the help of Mini-TX beacons.

 

The Starter Set NIA-SmallDrone is specifically designed for this kind of drone.

 

 

 
 

Recommended Marvelmind configurations for drones

 
 

The minimum configuration for drone tracking would be any NIA set with 3D capability. For example, 3 stationary beacons + 1 mobile beacon + 1 modem running Non-Inverse Architecture (NIA) or Multi-Frequency NIA (MF NIA) would already be enough for a drone.

 

See the Products page for the different starter set options.


However, just 3 stationary beacons would have little resiliency against obstructions. Any occlusion of any stationary beacon – a non-line-of-sight/hearing situation – will lead to no tracking or erroneous tracking. Very much like GPS: “no satellite visibility = no GPS coordinates = no tracking”.

Thus, we recommend at least N+1 redundancy for stationary beacons, and that is why our starter sets for 3D consist of 4 stationary beacons.

Even better is to have 2N redundancy with fully overlapping 3D submaps, i.e. either 3+3 or 4+4 stationary beacons. The system automatically chooses the best submap for tracking. That kind of system is very resilient, and with proper placement of the beacons you can fly even in complex rooms (with columns, for example) without tracking issues.

 

The key to good tracking is to provide proper coverage at every flight point, i.e. the mobile beacon(s) on the drone must have 3 or more stationary beacons belonging to the same submap with clear, direct line of sight/hearing within 30 m.
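As a quick pre-flight sanity check, you can sweep your intended flight volume and verify that every point has at least 3 stationary beacons of one submap within the 30 m range and with clear line of sight. A minimal Python sketch, where the beacon coordinates, submap grouping and the line-of-sight test are placeholders to adapt to your own room geometry:

```python
import math

# Hypothetical stationary-beacon positions (x, y, z in metres), grouped by submap.
SUBMAPS = {
    "submap_a": [(0.0, 0.0, 3.0), (6.0, 0.0, 3.0), (6.0, 8.0, 3.0), (0.0, 8.0, 3.0)],
}
MAX_RANGE_M = 30.0  # ultrasonic coverage limit mentioned above
MIN_BEACONS = 3     # minimum beacons of a single submap needed for a 3D fix


def has_line_of_sight(beacon, point):
    """Placeholder: replace with an occlusion check against your room geometry."""
    return True


def covered(point):
    """True if some submap has MIN_BEACONS or more beacons that see `point`
    with clear line of sight within MAX_RANGE_M."""
    for beacons in SUBMAPS.values():
        visible = sum(
            1 for b in beacons
            if math.dist(b, point) <= MAX_RANGE_M and has_line_of_sight(b, point)
        )
        if visible >= MIN_BEACONS:
            return True
    return False


# Check a coarse grid of intended flight points before take-off.
gaps = [(x, y, z)
        for x in range(0, 7) for y in range(0, 9) for z in (0.5, 1.5, 2.5)
        if not covered((float(x), float(y), z))]
print("uncovered flight points:", gaps)
```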

 

Proper placement is usually the key, and it is particularly important for drones: they require 3D, they are fast, and mistakes can be particularly costly. What should you pay attention to?

 

– The single most important requirement for good tracking or autonomous flight – provide clear line of sight/hearing visibility from the mobile beacons on the drone to 3 or more stationary beacons

 

– Don’t rely on magnetometers indoors. Use the Paired Beacons configuration for Location+Direction

 

– Place the stationary beacons so that the angles from the mobile beacons to the stationary beacons are 30 degrees or more. See a longer explanation in the video

 

– Use the Precise-Z configuration when it is not possible to achieve proper angles to the stationary beacons otherwise

 

 

 

 

Read more…
3D Robotics
From Hackster
 
 

A team of researchers from Japan and Vietnam have published a paper detailing a novel image processing algorithm capable of reading floor features accurately enough to allow drones to navigate autonomously indoors using a simple low-resolution camera.

There's nothing new about the concept of autonomous drones, but technologies which work well for navigation outdoors — in particular GPS and other global navigation satellite systems (GNSSes) — don't always translate well to indoor use. "We considered different hardware options, including laser rangefinders," explains lead author Chinthaka Premachandra of his team's work. "But rangefinders are too heavy, and infrared and ultrasonic sensors suffer from low precision. So that led us to using a camera as the robot's visual sensor. If you think of the camera in your cell phone, that gives you an idea of just how small and light they can be."

The prototype developed uses a Raspberry Pi 3 single-board computer and a low-cost low-resolution camera fitted to a small off-the-shelf quadcopter drone driven by a Holybro Pixhawk 4 flight controller. The camera takes an 80x80 resolution snapshot of the floor underneath it, then analyses it to infer its movement. "Our robot only needed to distinguish its direction of motion and identify corners," Premachandra notes. "From there, our algorithm allows it to extrapolate its position in the room, helping it avoid contacting the walls."
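The paper's own algorithm isn't reproduced in the article, but a common way to infer frame-to-frame motion from such low-resolution floor images is phase correlation between consecutive grayscale frames. A rough sketch of that generic technique (not the authors' published method) using OpenCV:

```python
import cv2
import numpy as np


def floor_shift(prev_gray, curr_gray):
    """Estimate the (dx, dy) pixel shift between two 80x80 grayscale floor
    snapshots with phase correlation; `response` indicates confidence."""
    (dx, dy), response = cv2.phaseCorrelate(np.float32(prev_gray), np.float32(curr_gray))
    return dx, dy, response


# Demo with synthetic frames: a random floor texture shifted by 3 pixels.
rng = np.random.default_rng(0)
frame0 = rng.random((80, 80), dtype=np.float32)
frame1 = np.roll(frame0, 3, axis=1)
print(floor_shift(frame0, frame1))  # magnitude of the x shift should be ~3 px
```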

 

There's a catch, of course: While the prototype proved effective, it was keying on the edges of tiles used in the test room's flooring. As a result, the work isn't immediately transferable to rooms with other types of flooring — particularly carpeting without a repeating, reliable pattern. Nevertheless, Premachandra predicts that the technology — or a future variant using infrared cameras — could be useful "in warehouses, distribution centers, and industrial applications to remotely monitor safety."

The paper has been published under open-access terms in the IEEE/CAA Journal of Automatica Sinica.

Read more…
3D Robotics


ProJet MQ-9 Reaper (fiberglass and balsa -- very nice) (gone) 

Folks, I'm moving house and it's time to get realistic about all the unopened RC plane kits I seem to have accumulated over the years. These are all in the box, ready to fly (minus RC and batteries).

All free to first bidder! If you can pick them up in Berkeley, CA, they're all yours. Once the local pickups go, I'll consider shipping them elsewhere in the US if you'll pay shipping. 

Here's the list of what's available (plus the Reaper above). Again: these are all in kit form in the box -- they've never been assembled or flown, so they should be in mint condition.

DM me if you want one of them and can pick it up

 

Two Nitroplanes RQ-11 drones (fiberglass and balsa, all finished) (both gone)

 

Global Hawk ducted fan RC model (gone)


 

 

Sky Arrow (gone)


 

Easy Fly ST330 Easy Glider clone: (gone)


 

Busy Bee (gone)

Sky Surfer (gone)


 

Hobby King EPP-FPV


 

GWS Slow Stick


 

 

 

 

Read more…
3D Robotics

From Hackaday:

With lockdown regulations sweeping the globe, many have found themselves spending altogether too much time inside with not a lot to do. [Peter Hall] is one such individual, with a penchant for flying quadcopters. With the great outdoors all but denied, he instead endeavoured to find a way to make flying inside a more exciting experience. We’d say he’s succeeded.

The setup involves using a SteamVR virtual reality tracker to monitor the position of a quadcopter inside a room. This data is then passed back to the quadcopter at a high rate, giving the autopilot fast, accurate data upon which to execute manoeuvres. PyOpenVR is used to do the motion tracking, and in combination with MAVProxy, sends the information over MAVLink back to the copter’s ArduPilot.
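The exact scripts aren't included in the write-up, but the data path is straightforward to sketch: read the tracker pose (here a hypothetical get_tracker_pose() stands in for the PyOpenVR query) and forward it to ArduPilot as VISION_POSITION_ESTIMATE messages with pymavlink. The connection string below assumes a MAVProxy --out endpoint and is illustrative only:

```python
import time
from pymavlink import mavutil


def get_tracker_pose():
    """Hypothetical placeholder for the PyOpenVR call that returns the
    tracker's position (m) and attitude (rad) in the EKF's frame."""
    return 0.0, 0.0, -1.0, 0.0, 0.0, 0.0  # x, y, z, roll, pitch, yaw


# Listen on a port that MAVProxy forwards to, e.g. --out=udp:127.0.0.1:14551
master = mavutil.mavlink_connection("udp:127.0.0.1:14551")
master.wait_heartbeat()

while True:
    x, y, z, roll, pitch, yaw = get_tracker_pose()
    usec = int(time.time() * 1e6)
    # Feed the external position estimate into the flight controller's EKF.
    master.mav.vision_position_estimate_send(usec, x, y, z, roll, pitch, yaw)
    time.sleep(0.02)  # ~50 Hz update; the real setup pushes data at a high rate
```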

While such a setup could be used to simply stop the copter crashing into things, [Peter] doesn’t like to do things by half measures. Instead, he took full advantage of the capabilities of the system, enabling the copter to fly aggressively in an incredibly small space.

It’s an impressive setup, and one that we’re sure could have further applications for those exploring the use of drones indoors. We’ve seen MAVLink used for nefarious purposes, too. 

Read more…
3D Robotics

It’s now been a couple of weeks since Nvidia released its new Jetson Xavier NX board, a $399 big brother to the Jetson Nano (and successor to the TX2) with 5-10 times the compute performance of the Nano (and 10-15x the performance of a Raspberry Pi 4) along with twice as much memory (8 GB). It comes with a carrier board similar to the Nano's, with the same Raspberry Pi GPIO pins, but includes built-in Wifi/BT and an SSD card slot, which is a big improvement over the Nano.

How well does it suit DIY Robocars such as Donkeycar? Well, there are pluses and minuses:

Pros:

  • All that computing power means that you can run deeper learning models with multiple cameras at full resolution. You can’t beat it for performance.
  • It also means that you can do your training on-car, rather than having to export to AWS or your laptop
  • Built-in wifi is great
  • Same price but smaller and way more powerful than a TX2.

Cons:

  • Four times the price of the Nano
  • The native carrier board for the Jetson NX runs at 12-19v, as opposed to the Nano, which runs at 5v. That means that the regular batteries and power supplies we use with most cars that use a Raspberry Pi or Nano won’t work. You have two options:
    • 1) Use a step-up voltage converter like this
    • 2) Use a Nano’s carrier board if you have one. But you can’t use just any one! The NX will only work with the second-generation Nano carrier board, the one with two camera inputs (it’s called B-01)
  • When it shipped, the NX had the wrong I2C bus for the RPi-style GPIO pins (it used the bus numbers from the older TX2 board rather than the Nano, which is odd because it shares a form factor with the Nano). After I brought this to Nvidia’s attention they said they would release a utility that allows you to remap the I2C bus/pins. Until then, RPi I2C peripherals won’t work unless they allow you to reset their bus to #8 (as opposed to the default #1); a sketch of addressing a peripheral on bus 8 follows this list. Alternatively, if your I2C peripheral has wires to connect to the pins (as opposed to a fixed header) you can use the NX’s pins 27 and 28 rather than the usual 3 and 5, and that will work on Bus 1
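For example, an I2C peripheral driver that accepts a bus number can simply be pointed at bus 8 on the stock NX carrier board. A rough sketch with smbus2 (the PCA9685 address and register are illustrative; the bus mapping is as described above):

```python
from smbus2 import SMBus

I2C_BUS = 8          # RPi-style header pins 3/5 on the stock Xavier NX carrier board
PCA9685_ADDR = 0x40  # typical default address of the PWM/servo driver used on many Donkeycars

with SMBus(I2C_BUS) as bus:
    # Read the MODE1 register (0x00) just to confirm the device answers on this bus.
    mode1 = bus.read_byte_data(PCA9685_ADDR, 0x00)
    print(f"PCA9685 MODE1 on bus {I2C_BUS}: 0x{mode1:02x}")
```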

I’ve managed to set up the Donkey framework on the Xavier NX, and there were a few issues, mostly involving the fact that it ships with the new Jetpack 4.4, which requires a newer version of TensorFlow than the standard Donkey setup uses. The Donkey docs and installation scripts are being updated to address that, and I’m hoping that by the time you read this the setup will be seamless and automatic.

I’ll also be trying it with the new Nvidia Isaac robotic development system. Although the previous version of Isaac didn’t work with the Xavier NX, version 2020.1 just came out so fingers crossed this works out of the box.

Read more…
3D Robotics

Welcome to the new DIY Drones design!


You may have noticed that DIY Drones looks a little different today. That's because we finally switched over to the Ning 3.0 hosting framework, which offers a bunch of advantages along with continuity with the existing content, membership and basic flow. Although Ning 3.0 was introduced back in 2013, Ning has changed hands since then and the development was not really complete until last year. So we waited until everything was stable to make the change.

Here are some of the new features that you may notice:

  • Works great on mobile! Finally, a responsive design that works on any size screen, taking advantage of the full width and height on any device.
  • Social sharing is built in (Twitter, Facebook, LinkedIn)
  • Wider layout takes advantage of larger screens, more open design
  • A lot of behind-the-scenes tools to make managing and moderating the site easier
  • Overall, we've cleaned up the site and removed older unused features. 

All your content and membership information should be transferred intact, but please let me know if anything is missing.

There are probably still a few glitches that we'll clean up over the next few days, but overall this should carry us well into our second decade!

Known bugs/items that we're working on:

  • Content from the old groups is not showing up. While we sort this out, you can get access to them on the old site here.
  • We've removed some navigation elements from the old site to simplify this one. If you're really missing something, let me know
  • We're debating between full-screen width (more spread out, but can get really sloppy on very wide screens) or fixed 1080 width (what it currently is).

Here's a screenshot of the "before" (it's a lot narrower)


 

 

Read more…
3D Robotics

Some academics at the University of Toronto have released a paper showing different techniques for correcting the position errors in the Crazyflie ultrawideband-based indoor localization tech. None of them are perfect, but it's interesting to see what works best.

"

Accurate indoor localization is a crucial enabling technology for many robotic applications, from warehouse management to monitoring tasks. Ultra-wideband (UWB) localization technology, in particular, has been shown to provide robust, high-resolution, and obstacle-penetrating ranging measurements. Nonetheless, UWB measurements are still corrupted by non-line-of-sight (NLOS) communication and spatially-varying biases due to the doughnut-shaped antenna radiation pattern. In our recent work, we present a lightweight, two-step measurement correction method to improve the performance of both TWR- and TDoA-based UWB localization. We integrate our method into the Extended Kalman Filter (EKF) onboard a Crazyflie and demonstrate closed-loop position estimation performance with ~20cm root-mean-square (RMS) error.

A stylized depiction of our UWB indoor localization system and the schematics of the proposed estimation framework.

Methodology

UWB measurement errors can be separated into two groups: (1) systematic bias caused by limitations in the UWB antenna pattern and (2) spurious measurements due to NLOS and multi-path propagation. We propose a two-step UWB bias correction approach exploiting machine learning (to address (1)) and statistical testing (to address (2)). The data-driven nature of our approach makes it agnostic to the origin of the measurement errors it corrects."
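The paper's actual models aren't reproduced here, but the two-step idea is easy to sketch: subtract a learned, pose-dependent bias from each range, then gate the corrected measurement with a statistical (normalized-innovation) test before the EKF uses it. A minimal illustration with placeholder models:

```python
import numpy as np

CHI2_GATE_1DOF = 6.63  # ~99% chi-square gate for a scalar innovation


def learned_bias(features):
    """Placeholder for the trained bias model (e.g. a small neural network)
    mapping relative pose / antenna-angle features to a range bias in metres."""
    return 0.05  # constant stand-in value


def correct_and_gate(measured_range, predicted_range, innovation_var, features):
    """Step 1: remove the systematic antenna bias.
    Step 2: reject NLOS / multipath outliers with a chi-square test."""
    corrected = measured_range - learned_bias(features)
    innovation = corrected - predicted_range
    if innovation ** 2 / innovation_var > CHI2_GATE_1DOF:
        return None  # spurious measurement: skip this EKF update
    return corrected


# Example: a 10.0 m measurement against a 9.9 m prediction passes the gate.
print(correct_and_gate(10.0, 9.9, innovation_var=0.04, features=np.zeros(4)))
```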

More here

Read more…
3D Robotics

Very cool DIY F35

Not a drone (yet), but DIY and very well done. From Hackaday:

The advent of affordable gear for radio-controlled aircraft has made the hobby extremely accessible, but also made it possible to build some very complex flying machines on a budget, especially when combined with 3D printing. [Joel Vlashof] really likes VTOL fighter aircraft and is in the process of building a fully functional radio-controlled F-35B.

The F-35 series of aircraft is one of the most expensive defence projects to date. The VTOL-capable “B” variant is a complex machine, with a total of 19 doors on the outside of the aircraft for weapons, landing gear and thrusters. The thruster on the tail can pivot 90° down for VTOL operations, using an interesting 3-bearing swivel mechanism.

[Joel] wants his model to be as close as possible to the real thing, and has integrated all these features into his build. Thrust is provided by two EDF motors, the pivoting nozzle is 3D printed and actuated by three sets of small DC motors, and all 5 doors for VTOL are actuated by a single servo in the nose via a series of linkages. For tilt control, air from the main fan is channeled to the wing-tips and controlled by servo-actuated valves. A flight controller intended for use on a multi-rotor is used to help keep the plane stable while hovering. One iteration of this plane bit the dust during development, but [Joel] has done successful test flights for both hover and conventional horizontal flight. The really tricky part will be transitioning between flight modes, and [Joel] hopes to achieve that in the near future.

Read more…
3D Robotics

Very cool research from Microsoft on using their AirSim simulators to train racing drones for the real world:

Humans subconsciously use perception-action loops to do just about everything, from walking down a crowded sidewalk to scoring a goal in a community soccer league. Perception-action loops—using sensory input to decide on appropriate action in a continuous real-time loop—are at the heart of autonomous systems. Although this tech has advanced dramatically in the ability to use sensors and cameras to reason about control actions, the current generation of autonomous systems is still nowhere near human skill in making those decisions directly from visual data. Here, we share how we have built Machine Learning systems that reason out correct actions to take directly from camera images. The system is trained via simulations and learns to independently navigate challenging environments and conditions in the real world, including unseen situations.

Read the Paper | Download the Code | Watch the Video

We wanted to push current technology to get closer to a human’s ability to interpret environmental cues, adapt to difficult conditions and operate autonomously. For example, in First Person View (FPV) drone racing, expert pilots can plan and control a quadrotor with high agility using a noisy monocular camera feed, without compromising safety. We were interested in exploring the question of what it would take to build autonomous systems that achieve similar performance levels. We trained deep neural nets on simulated data and deployed the learned models in real-world environments. Our framework explicitly separates the perception components (making sense of what you see) from the control policy (deciding what to do based on what you see). This two-stage approach helps researchers interpret and debug the deep neural models, which is hard to do with full end-to-end learning.

The ability to efficiently solve such perception-action loops with deep neural networks can have significant impact on real-world systems. Examples include our collaboration with researchers at Carnegie Mellon University and Oregon State University, collectively named Team Explorer, on the DARPA Subterranean (SubT) Challenge. The DARPA challenge centers on assisting first responders and those who lead search and rescue missions, especially in hazardous physical environments, to more quickly identify people in need of help.

The video above shows the DARPA Subterranean Challenge, one of the ways Microsoft is advancing the state of the art in the area of autonomous systems by supporting research focused on solving real-world challenges. Learn more about Microsoft Autonomous systems.

Team Explorer has participated in the first two circuits of the challenge, taking second place in the February 2020 Urban Circuit and first place in the September 2019 Tunnel Circuit. In the Tunnel Circuit, the robots navigated underground tunnels for an hour at a time to successfully locate hidden items. In the Urban Circuit, they navigated two courses designed to represent complex urban underground infrastructure, including stairs and elevation changes. Reasoning out correct control actions based on perception sensors is a critical component of mission success. The current methods used by Team Explorer include carefully engineered modules, such as localization, mapping and planning, which were then carefully orchestrated to carry out the mission. Here, we share how an approach of learning to map perception data to correct control actions can simplify the system further.


Figure 1. Our framework uses simulation to learn a low-dimensional state representation using multiple data modalities. This latent vector is used to learn a control policy which directly transfers to real-world environments. We successfully deploy the system under various track shapes and weather conditions, ranging from sunny days to strong snow and wind.

The Task

In first person view (FPV) drone racing, expert pilots can plan and control a quadrotor with high agility using a noisy monocular camera feed, without compromising safety. We attempted to mimic this ability with our framework, and tested it with an autonomous drone on a racing task.

We used a small agile quadrotor with a front facing camera, and our goal was to train a neural network policy to navigate through a previously unknown racing course. The network policy used only images from the RGB camera.

While autonomous drone racing is an active research area, most of the previous work so far has focused on engineering a system augmented with extra sensors and software with the sole aim of speed. Instead, we aimed to create a computational fabric, inspired by the function of a human brain, to map visual information directly to correct control actions. We achieved this by first converting the high-dimensional sequence of video frames to a low-dimensional representation that summarizes the state of the world.


Figure 2: Quadrotor used for the experiments. Images from the front-facing camera are processed on the onboard computer.

Our Approach

Our approach was to learn a visuomotor policy by decomposing the problem into the tasks of (1) building useful representations of the world and (2) taking a control action based on those representations. We used AirSim, a high-fidelity simulator, in the training phase and then deployed the learned policy in the real world without any modification. Figure 1 depicts the overall concept, showing a single perception module shared for simulated and real autonomous navigation.

A key challenge here is the models have to be robust to the differences (e.g., illumination, texture) between simulation and the real world. To this end, we used the Cross-Modal Variational Auto Encoder (CM-VAE) framework for generating representations that closely bridge the simulation-reality gap, avoiding overfitting to the eccentricities of synthetic data.

The first data modality considered the raw unlabeled sensor input (FPV images), while the second characterized state information directly relevant for the task at hand. In the case of drone racing, the second modality corresponds to the relative pose of the next gate defined in the drone’s coordinate frame. We learned a low-dimensional latent environment representation by extending the CM-VAE framework. The framework uses an encoder-decoder pair for each data modality, while constricting all inputs and outputs to and from a single latent space (see Fig. 3b).

The system naturally incorporated both labeled and unlabeled data modalities into the training process of the latent variable. Imitation learning was then used to train a deep control policy that mapped latent variables into velocity commands for the quadrotor (Fig. 3a).
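The paper's exact architecture isn't reproduced here, but the shape of a cross-modal VAE is easy to sketch: one encoder per modality, one shared latent space, and one decoder per modality, so a latent vector obtained from an image can be decoded back into an image or into a gate pose. A minimal PyTorch sketch with illustrative layer sizes (not the published network):

```python
import torch
import torch.nn as nn


class CrossModalVAE(nn.Module):
    """Two modalities (FPV image, relative gate pose) sharing one latent space."""

    def __init__(self, latent_dim=10):
        super().__init__()
        # Image encoder: 3x72x128 RGB frame -> latent mean / log-variance
        self.img_enc = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(32 * 18 * 32, 256), nn.ReLU(),
        )
        self.img_mu = nn.Linear(256, latent_dim)
        self.img_logvar = nn.Linear(256, latent_dim)
        # Pose encoder: relative gate pose (e.g. distance + angles) -> same latent space
        self.pose_enc = nn.Sequential(nn.Linear(4, 64), nn.ReLU())
        self.pose_mu = nn.Linear(64, latent_dim)
        self.pose_logvar = nn.Linear(64, latent_dim)
        # Decoders out of the shared latent space
        self.img_dec = nn.Sequential(
            nn.Linear(latent_dim, 32 * 18 * 32), nn.ReLU(),
            nn.Unflatten(1, (32, 18, 32)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        self.pose_dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 4))

    def reparameterize(self, mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, image):
        h = self.img_enc(image)
        mu, logvar = self.img_mu(h), self.img_logvar(h)
        z = self.reparameterize(mu, logvar)
        # The same latent vector decodes into either modality.
        return self.img_dec(z), self.pose_dec(z), mu, logvar


model = CrossModalVAE()
recon, gate_pose, mu, logvar = model(torch.zeros(1, 3, 72, 128))
print(recon.shape, gate_pose.shape)  # torch.Size([1, 3, 72, 128]) torch.Size([1, 4])
```

A control policy would then be trained, via imitation learning as described above, on the low-dimensional latent vector rather than on raw pixels.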


Figure 3. (a) Control system architecture. The input image from the drone’s video is encoded into a latent representation of the environment. A control policy acts on the lower-dimensional embedding to output the desired robot control commands. (b) Cross-modal VAE architecture. Each data sample is encoded into a single latent space that can be decoded back into images, or transformed into another data modality such as the poses of gates relative to the unmanned aerial vehicle (UAV).

Learning to understand the world

The role of our perception module was to compress the incoming input images into a low-dimensional representation. For example, the encoder compressed images of size 128 X 72 in pixels (width X height) from 27,648 original parameters (considering three color channels for RGB) down to the most essential 10 variables that can describe it.

We interpreted the robot’s understanding of the world by visualizing the latent space of our cross-modal representations (see Figure 4). Despite only using 10 variables to encode images, the decoded images provided a rich description of what the drone can see ahead, including all possible gates sizes and locations, and different background information.


Figure 4. Visualization of imaginary images generated from our cross-modal representation. The decoded image directly captures the relative gate pose and background information.

We also showed that this dimensionality compression technique is smooth and continuous. Figure 5 displays a smooth imaginary path between two images taken in real life. Given the cross-modal nature of the representation, we can see both decoded images and gate poses for the intermediate values.


Figure 5: Visualization of smooth latent space interpolation between two real-world images. The ground-truth and predicted distances between camera and gate for images A and B were (2.0, 6.0) and (2.5, 5.8) meters respectively.

Results

To show the capabilities of our approach on a physical platform, we tested the system on a 45-meter-long S-shaped track with 8 gates, and on a 40-meter-long circular track with 8 gates, as shown in Figure 6. Our policy using a cross-modal representation significantly outperformed end-to-end control policies and networks that directly encoded the position of the next gates, without reasoning over multiple data modalities.


Figure 6: Side and top view of the test tracks: a) Circuit track, and b) S-shape track.

The performance of standard architectures dropped significantly when deployed in the real world after training in simulation. Our cross-modal VAE, on the other hand, can still decode reasonable values for the gate distances despite being trained purely on simulation. For example, Fig. 7 displays the accumulated gate poses decoded from direct image-to-pose regression and from our framework, during three seconds of a real flight test. Direct regression results in noisy estimated gate positions, which are farther from the gate's true location.


Fig 7. Analysis of a three-second flight segment. a) Input images and their corresponding images decoded by the CM-VAE; b) Time history of gate center poses decoded from the CM-VAE (red) and regression (blue). The regression representation has significantly higher offset and noise from the true gate pose, which explains its poor flight performance.

We take our perception-control framework to its limits by testing it in visual conditions never seen before during the training phase in simulation. Fig. 8 shows examples of successful test cases under extreme visually-challenging conditions: a) indoors, with a blue floor containing red stripes with the same red tone as the gates, and Fig. 8 b-c) during heavy snows. Despite the intense visual distractions from background conditions, the drone was still able to complete the courses by employing our cross-modal perception module.

 

Challenges and Future

By separating the perception-action loop into two modules and incorporating multiple data modalities into the perception training phase, we can avoid overfitting our networks to non-relevant characteristics of the incoming data. For example, even though the sizes of the square gates were the same in simulation and physical experiments, their width, color, and even intrinsic camera parameters are not an exact match. The multiple streams of information that are fed into the cross-modal VAE aid in implicit regularization of the learned model, which leads to better generalization over appearance changes.

We believe our results show great potential for helping in real-world applications. For example, if an autonomous search and rescue robot is better able to recognize humans in spite of differences in age, size, gender, ethnicity and other factors, that robot has a better chance of identifying and retrieving people in need of help.

An unexpected result we came across during our experiments is that combining unlabeled real-world data with the labeled simulated data for training the representation models did not increase overall performance. Using simulation-only data worked better. We suspect that this drop in performance occurs because only simulated data was used in the control learning phase with imitation learning. One interesting direction for future work we are investigating is the use of adversarial techniques for lowering the distance in latent space between similar scenes encoded from simulated and real images. This would lower the difference between data distributions during training and testing phases.

We envision extending the approach of using unlabeled data for policy learning. For example, besides images, can we combine distinct data modalities such as laser measurements and even sound for learning representations of the environment? Our success with aerial vehicles also suggests the potential to apply this approach to other real-world robotics tasks. For instance, we plan to extend our approach to robotic manipulation which also requires a similar ability to interpret inputs in real time and make decisions while ensuring safe operations.

Read more…
3D Robotics

Using a drone to measure the wind


Cool research from Virginia Tech using a 3DR Solo to do wind measurement:

All multirotor drones can be used as wind sensors, providing wind profiles on demand, anywhere, with higher spatiotemporal resolution and a fraction of the cost of other methods. See the free open-access paper https://lnkd.in/em95QEa on how to obtain wind profiles from the drone's dynamic response to wind-induced perturbations. The study was led by a soon-to-graduate Virginia Tech student.
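The linked paper derives wind from the vehicle's full dynamic response; a much cruder but common approximation is to map the steady hover tilt to wind speed through a pre-measured calibration curve. A sketch of that simpler idea only (the calibration numbers are made up):

```python
import math

# Hypothetical calibration table: steady hover tilt (deg) vs. wind speed (m/s),
# e.g. measured beforehand against an anemometer.
CAL_TILT_DEG = [0, 5, 10, 15, 20, 25]
CAL_WIND_MS = [0.0, 2.1, 4.0, 5.8, 7.8, 10.2]


def wind_from_attitude(roll_rad, pitch_rad):
    """Crude wind-speed estimate from hover tilt via linear interpolation of a
    calibration table. Not the linked paper's dynamic-response method."""
    tilt_deg = math.degrees(math.acos(math.cos(roll_rad) * math.cos(pitch_rad)))
    for (t0, w0), (t1, w1) in zip(zip(CAL_TILT_DEG, CAL_WIND_MS),
                                  zip(CAL_TILT_DEG[1:], CAL_WIND_MS[1:])):
        if t0 <= tilt_deg <= t1:
            return w0 + (w1 - w0) * (tilt_deg - t0) / (t1 - t0)
    return CAL_WIND_MS[-1]  # beyond the table: clamp to the last value


print(round(wind_from_attitude(math.radians(3.0), math.radians(8.0)), 1), "m/s")
```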

Read more…
3D Robotics


From Microsoft Research:

The past few years have seen tremendous progress in reinforcement learning (RL). From complex games to robotic object manipulation, RL has qualitatively advanced the state of the art. However, modern RL techniques require a lot for success: a largely deterministic stationary environment, an accurate resettable simulator in which mistakes – and especially their consequences – are limited to the virtual sphere, powerful computers, and a lot of energy to run them. At Microsoft Research, we are working towards automatic decision-making approaches that bring us closer to the vision of AI agents capable of learning and acting autonomously in changeable open-world conditions using the limited onboard compute. Project Frigatebird is our ambitious quest in this space, aimed at building intelligence that can enable small fixed-wing uninhabited aerial vehicles (sUAVs) to stay aloft purely by extracting energy from moving air.

Let’s talk hardware

Snipe 2, our latest sUAV, pictured above, exemplifies Project Frigatebird’s hardware platforms. It is a small version of a special type of human-piloted aircraft known as sailplanes, also called gliders. Like many sailplanes, Snipe 2 doesn’t have a motor; even sailplanes that do, carry just enough power to run it for only a minute or two. Snipe 2 is hand-tossed into the air to an altitude of approximately 60 meters and then slowly descends to the ground—unless it finds a rising air current called a thermal (see Figure 2) and exploits it to soar higher. For human pilots in full-scale sailplanes, travelling hundreds of miles solely powered on these naturally occurring sources of lift is a popular sport. For certain birds like albatrosses or frigatebirds, covering great distances in this way with nary a wing flap is a natural-born skill. A skill that we would very much like to bestow on Snipe 2’s AI.


Figure 1: the layout of hardware for autonomous soaring in Snipe 2’s narrow fuselage.

Snipe 2’s 1.5 meter-wingspan airframe weighs a mere 163 grams, its slender fuselage only 35 mm wide at its widest spot. Yet it carries an off-the-shelf Pixhawk 4 Mini flight controller and all requisite peripherals for fully autonomous flight (see Figure 1.) This “brain” has more than enough punch to run our Bayesian reinforcement learning-based soaring algorithm, POMDSoar. It can also receive a strategic, more computationally heavy, navigation policy over the radio from a laptop on the ground, further enhancing the sUAV’s ability to find columns of rising air. Alternatively, Snipe 2 can house more powerful but still sufficiently compact hardware such as Raspberry Pi Zero to compute this policy onboard. Our larger sailplane drones like the 5-meter wingspan Thermik XXXL can carry even more sophisticated equipment, including cameras and a computational platform for processing their data in real time for hours on end. Indeed, nowadays the only barrier preventing winged drones from staying aloft for this long on atmospheric energy alone in favorable weather is the lack of sufficient AI capabilities.

Reaching higher

Why is building this intelligence hard? Exactly because of the factors that limit modern RL's applicability. Autopilots of conventional aircraft are built on fairly simple control-based approaches. This strategy works because an aircraft's motors, in combination with its wings, deliver a stable source of lift, allowing it to "overpower" most of the variable factors affecting its flight, for example, wind. Sailplanes, on the other hand, are "underactuated" and must make use of – not overpower – highly uncertain and non-stationary atmospheric phenomena to stay aloft. Thermals, the columns of upward-moving air in which hawks and other birds are often seen gracefully circling, are an example of these stochastic phenomena. A thermal can disappear minutes after appearing, and the amount of lift it provides varies across its lifecycle, with altitude, and with distance from the thermal center. Finding thermals is a difficult problem in itself. They cannot be seen directly; a sailplane can infer their size and location only approximately. Human pilots rely on local knowledge, ground features, observing the behavior of birds and other sailplanes, and other cues, in addition to instrument readings, to guess where thermals are. Interpreting some of these cues involves simple-sounding but nontrivial computer vision problems—for example, estimating distance to objects seen against the background of featureless sky. Decision-making based on these observations is even more complicated. It requires integrating diverse sensor data on hardware far less capable than a human brain, and accounting for large amounts of uncertainty over large planning horizons. Accurately inferring the consequences of various decisions using simulations, a common approach in modern RL, is thwarted under these conditions by the lack of onboard compute and energy to run it.


Figure 3: (Left) A schematic depiction of air movement within thermals and a sailplane’s trajectory. (Right) A visualization of an actual thermal soaring trajectory from one of our sUAVs’ flights.

Our first steps have focused on using thermals to gain altitude:

  • Our RSS-2018 paper was the first autonomous soaring work to deploy an RL algorithm for exploiting thermals aboard an actual sailplane sUAV, as opposed to simulation. It also showed RL’s advantage at this task over a strong baseline algorithm based on control and replanning, an instance of a class of autonomous thermaling approaches predominant in prior work, in a series of field tests. Our Bayesian RL algorithm POMDSoar deliberatively plans learning about the environment and exploiting the acquired knowledge. This property gives it an edge over more traditional soaring controllers that update their thermal model and adjust their thermaling strategy as they gather more data about the environment, but don’t take intentional steps to optimize the information gathering.
  • Our IROS-2018 paper studied ArduSoar, a control-based thermaling strategy that plans based on the current most-likely thermal model, and we have found it to perform very well. As a simple, robust soaring controller, ArduSoar has been integrated into ArduPlane, a major open-source autopilot for fixed-wing drones. (A minimal sketch of the kind of updraft model such controllers fit appears below.)
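Both controllers maintain an estimate of the thermal they are circling; a common simplification in the soaring literature models the updraft as a 2D Gaussian bump whose strength, radius and centre are fitted from variometer readings. A minimal sketch of that model (parameter values are illustrative):

```python
import math


def thermal_updraft(x, y, strength=2.5, radius=30.0, xc=0.0, yc=0.0):
    """Gaussian updraft model: vertical air speed (m/s) at (x, y) for a thermal
    of given strength (m/s), characteristic radius (m) and centre (xc, yc)."""
    r_sq = (x - xc) ** 2 + (y - yc) ** 2
    return strength * math.exp(-r_sq / radius ** 2)


# A sailplane circling 20 m from the centre of this thermal sees roughly:
print(round(thermal_updraft(20.0, 0.0), 2), "m/s of lift")
```

A soaring controller refines those few parameters in flight from its climb-rate measurements and centres its circle on the estimated core.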

Figure 4: An animated 3D visualization of a real simultaneous flight of two motorized Radian Pro sailplanes, one running ArduSoar and another running POMDSoar. At the end, one of the Radians can be seen engaging in low-altitude orographic soaring near a tree line, getting blown by a wind gust into a tree, and becoming stuck there roughly 35 meters above the ground – a reality of drone testing in the field. After some time, the Radian was retrieved from a nearby swamp and repaired. It flies to this day.

We released both POMDSoar and ArduSoar as part of the Frigatebird autopilot on GitHub, which is based on a fork of ArduPlane.

On a wing and a simulator

Although Project Frigatebird’s goal is to take RL beyond simulated settings, simulations play a central role in the project. While working on POMDSoar and ArduSoar, we saved a lot of time by evaluating our ideas on a simulator in the lab before doing field tests. Besides saving time, simulators allow us to do crucial experiments that would be very difficult to do logistically in the field. This applies primarily to long-distance navigation, where simulation lets us learn and assess strategies over multi-kilometer distances over various types of terrain, conditions we don’t have easy access to in reality.


Figure 5: Software-in-the-loop simulation in Silent Wings. A Frigatebird-controlled LS-8b sailplane is trying to catch a thermal where another sailplane is already soaring on a windy day near Starmoen, Norway. For debugging convenience, Silent Wings indicates the centers of thermals and ridge lift, which are invisible in reality, with red arrows (this visualization can be disabled).

To facilitate such experimentation for other researchers, we released a software-in-the-loop (SITL) integration between Frigatebird and a soaring flight simulator, Silent Wings. Silent Wings is renowned for the fidelity of its soaring flight experience. Importantly for experiments like ours, it provides the most accurate modelling of the distribution of thermals and ridge lift across the natural landscape as a function of terrain features, time, and environmental conditions that we’ve encountered in any simulator. This gives us confidence that Silent Wings’ evaluation of long-range navigation strategies, which critically rely on these distributions, will yield qualitatively similar results to what we will see during field experiments.

Flight plan

Sensors let sailplane sUAVs reliably recognize when they are flying through a thermal, and techniques like POMDSoar let them soar higher, even in the weak turbulent thermals found at lower altitudes. However, without the ability to predict from a distance where thermals are, the sailplane drones can’t devise a robust navigation strategy from point A to point B. To address this problem, in partnership with scientists from ETH Zurich’s Autonomous Systems Lab, we are researching remote thermal prediction and its integration with motion planning.

Thermals appear due to warmer parts of the ground heating up the air above them and forcing it to rise. Our joint efforts with ETH Zurich's team focus on detecting the temperature differences that cause this process, as well as other useful features from a distance, using infrared and optical cameras mounted on the sailplane, and forecasting thermal locations from them (see Figure 6). However, infrared cameras cannot "see" such minute temperature variations in the air, and not every warm patch on the ground gives rise to a thermal, making this a hard but exciting problem. Integrating the resulting predictions with reinforcement learning for motion planning raises research challenges of its own due to the uncertainty in the predictions and difficulties in field evaluation of this approach.


Figure 6: A schematic of a sailplane predicting thermal locations in front of itself by mapping the terrain with infrared and optical cameras. Image provided by ETH Zurich’s Autonomous Systems Lab.

Crew

Building intelligence for a robotic platform that critically relies on, not merely copes with, highly variable atmospheric phenomena outdoors so that it can soar as well as the best soarers – birds! – takes expertise far beyond AI itself. To achieve our dream, we have been collaborating with experts from all over the world. Iain Guilliard, a Ph.D. student from the Australian National University and a former intern at Microsoft Research, has been the driving force behind POMDSoar. Samuel Tabor, a UK-based autonomous soaring enthusiast, has developed the alternative control-based ArduSoar approach and helped build the software-in-the-loop integration for Silent Wings. The Frigatebird autopilot, which includes POMDSoar and ArduSoar, is based on the ArduPlane open-source project and on feedback from the international community of its developers. We are researching infrared/optical vision-aided thermal prediction with our partners Nicholas Lawrance, Jen Jen Chung, Timo Hinzmann, and Florian Achermann at ETH Zurich’s Autonomous Systems Lab led by Roland Siegwart. The know-how of all these people augments our project team’s in-house expertise in automatic sequential decision-making, robotics/vision (Debadeepta Dey), and soaring (Rick Rogahn).

Read more…
3D Robotics


Hey, all 91,000 of you! After way too many years of an old-skool design here based on the original Ning network template of 2007, I'm thinking it's time to upgrade to the more modern Ning 3.0 service. It should be much more usable on mobile and big screens, and otherwise the responsive, social-friendly platform you'd expect today. It's also an opportunity to clean up the site without losing more than 13 years of content (which is a lot! Millions of pages...).

I've set up a staging site where we can see and refine this site on Ning 3.0 before making the formal switchover.  It's here.

Would you like to help me update the site for a new decade? If so, please PM me and I'll give you edit access to the new site.

Thanks!

Chris

Read more…
3D Robotics

Microsoft AirSim drone racing at NeurIPS


From the Microsoft Research Team:

Drone racing has transformed from a niche activity sparked by enthusiastic hobbyists to an internationally televised sport. In parallel, computer vision and machine learning are making rapid progress, along with advances in agile trajectory planning, control, and state estimation for quadcopters. These advances enable increased autonomy and reliability for drones. More recently, the unmanned aerial vehicle (UAV) research community has begun to tackle the drone-racing problem. This has given rise to competitions, with the goal of beating human performance in drone racing.

At the thirty-third Conference on Neural Information Processing Systems (NeurIPS 2019), the AirSim research team is working together with Stanford University and University of Zurich to further democratize drone-racing research by hosting a simulation-based competition, Game of Drones. We are hosting the competition on Microsoft AirSim, our Unreal Engine-based simulator for multirotors. The competition focuses on trajectory planning and control, computer vision, and opponent drone avoidance. This is achieved via three tiers:

  • Tier 1 – Planning only: The participant’s drone races tête-à-tête with a Microsoft Research opponent racer. The goal is to go through all gates in the minimum possible time, without hitting the opponent drone. Ground truth for gate poses, the opponent drone pose, and the participant drone are provided. These are accessible via our application-programming interfaces (APIs). The opponent racer follows a minimum jerk trajectory, which goes through randomized waypoints selected in each gate’s cross section.
  • Tier 2 – Perception only: This is a time trial format where the participants are provided with noisy gate poses. There’s no opponent drone. The next gate will not always be in view, but the noisy pose returned by our API will steer the drone roughly in the right direction, after which vision-based control would be necessary.
  • Tier 3 – Perception and Planning: This combines Tier 1 and 2. Given the ground truth state estimate for participant drone and noisy estimate for gates, the goal is to race against the opponent racer without colliding with it.

The animation on the left below shows the ground truth gate poses (Tier 1), while the animation on the right shows the noisy gate poses (Tier 2 and Tier 3). In each animation, the drone is tracking a minimum jerk trajectory using one of our competition APIs.
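For reference, the rest-to-rest minimum jerk profile mentioned here has a simple closed form per axis; the competition API computes trajectories for you, so the snippet below is just the underlying math:

```python
def min_jerk(p0, pf, T, t):
    """Rest-to-rest minimum jerk position at time t over a segment of duration T:
    p(t) = p0 + (pf - p0) * (10 s^3 - 15 s^4 + 6 s^5), with s = t / T."""
    s = max(0.0, min(1.0, t / T))
    return p0 + (pf - p0) * (10 * s ** 3 - 15 * s ** 4 + 6 * s ** 5)


# Sample one axis of a 4-second segment between two gate waypoints, 0 m to 10 m.
print([round(min_jerk(0.0, 10.0, 4.0, 0.5 * k), 2) for k in range(9)])
```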

Image shows the ground truth gate poses

 

The following animation shows a segment of one of our racing tracks with two drones racing against each other. Here “drone_2” (pink spline) is the opponent racer going through randomized waypoints in each gate cross section, while “drone_1” (yellow spline) is a representative competitor going through the gate centers.

This animation shows a segment of one of our racing tracks with two drones racing against each other

The competition is being run in two stages—an initial qualification round and a final round. A set of training binaries with configurable racetracks was made available to the participants initially, for prototyping and verification of algorithms on arbitrary racetracks. In the qualification stage (Oct 15th to Nov 21st), teams were asked to submit their entries for a subset or all of the three competition tiers.  117 teams registered for the competition worldwide, with 16 unique entries that have shown up on the qualification leaderboard.

We are now running the final round of the competition and the corresponding leaderboard is available here. All of the information for the competition is available at our GitHub repository, along with the training, qualification, and final race environments.

Engineering-wise, we introduced some new APIs in AirSim specifically for the competition, and we’re continually adding more features as we get feedback. We highlight the main components below:

In the long term, we intend to keep the competition open, and we will be adding more racing environments after NeurIPS 2019. While the first iteration brought an array of new features to AirSim, there are still many essential ingredients for trustable autonomy in real-world scenarios and effective simulation-to-reality transfer of learned policies. These include reliable state estimation; camera sensor models and motion blur; robustness to environmental conditions like weather, brightness, and diversity in texture and shape of the drone racing gates; and robustness against dynamics of the quadcopter. Over the next iterations, we aim to extend the competition to focus on these components of autonomy as well.

For more of the exciting work Microsoft is doing with AirSim, see our blog post on Ignite 2019.

Read more…
3D Robotics

Evolution of solar-powered drones

From Hackaday:

Many of us have projects that end up spanning multiple years and multiple iterations, and get revisited every time inspiration strikes and you’ve forgotten just how much work and frustration the previous round was. For [Daniel Riley] AKA [rctestflight] that project is a solar powered RC plane which to date spans 4 years, 4 versions and 13 videos. It is a treasure trove of information collected through hard experience, covering carbon fibre construction techniques, solar power management and the challenges of testing in the real world, among others.

Solar Plane V1 had a 9.5 ft / 2.9 m carbon fibre skeleton wing, covered with transparent film, with the fragile monocrystalline solar cells mounted inside the wing. V1 experienced multiple crashes which shattered all the solar cells, until [Daniel] discovered that the wing flexed under aileron input. It also did not have any form of solar charge control. V2 added a second wing spar to a slightly longer 9.83 ft / 3 m wing, which allowed for more solar cells.

Solar Plane V3 was upgraded to use a single hexagonal spar to save weight while still keeping stiff, and the solar cells were more durable and efficient. [Daniel] did a lot of testing to find an optimal solar charging set-up and found that using the solar array to charge the batteries directly in a well-balanced system actually works equally well or better than an MPPT charge controller.
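The intuition for why direct charging can hold its own: when the series string's maximum-power-point voltage sits close to the battery voltage, the battery itself pins the panel near its MPP and an MPPT converter has little left to recover (while adding its own conversion losses). A back-of-the-envelope check with illustrative numbers, not [Daniel]'s exact cell count:

```python
# Illustrative figures: typical silicon cell MPP voltage and a 3S LiPo pack.
cells_in_series = 24
v_mpp_per_cell = 0.5      # volts per cell at the maximum power point (rough figure)
battery_voltage = 11.8    # a 3S LiPo somewhere mid-charge

v_mpp_string = cells_in_series * v_mpp_per_cell
mismatch = abs(v_mpp_string - battery_voltage) / v_mpp_string
print(f"string MPP ~{v_mpp_string:.1f} V vs battery ~{battery_voltage:.1f} V "
      f"({mismatch:.0%} mismatch)")  # small mismatch => little for an MPPT to gain
```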

V4 is a departure from the complicated carbon fibre design, and uses a simple foam board flying wing with a stepped KF airfoil instead. The craft is much smaller with only a 6 ft / 1.83 m wingspan. It performed exceptionally well, keeping the battery fully charged during the entire flight, which unfortunately ended in a crash after adjusting the autopilot. [Daniel] suspects the main reasons for the improved performance are higher quality solar panels and the fact that there is no longer film covering the cells.

Read more…
3D Robotics

From Hackaday:

There’s nothing quite like the sight of a plastic box merrily sailing its way around a lake to symbolise how easy it is to get started in autonomous robotics. This isn’t a project we’re writing about because of technical excellence, but purely because watching an autonomous tupperware box navigate a lake by itself is surprisingly compelling viewing. The reason that [rctestflight] built the vessel was to test out the capabilities of ArduRover. ArduRover is, of course, a flavour of the extremely popular open source ArduPilot, and in this case is running on a Pixhawk.

The hardware itself is deliberately as simple as possible: two small motors with RC car ESCs, a GPS, some power management and a telemetry module are all it takes. The telemetry module allows the course/mission to be updated on the fly, as well as sending diagnostic data back home. Initially, this setup performed poorly; low GPS accuracy combined with a high-frequency control loop piloting a device with little inertia led to a very erratic path. But after applying some filtering to the GPS this improved significantly.
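[rctestflight] doesn't specify the exact filter, but even a simple exponential moving average over the reported fixes illustrates the trade: a little lag in exchange for a much smoother track for the low-inertia boat to follow:

```python
class GpsSmoother:
    """First-order (exponential) low-pass filter for noisy GPS fixes.
    Lower alpha = smoother but laggier; higher alpha = more responsive."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha
        self.state = None  # (lat, lon)

    def update(self, lat, lon):
        if self.state is None:
            self.state = (lat, lon)
        else:
            a = self.alpha
            self.state = (a * lat + (1 - a) * self.state[0],
                          a * lon + (1 - a) * self.state[1])
        return self.state


smoother = GpsSmoother(alpha=0.2)
for fix in [(47.60210, -122.33070), (47.60215, -122.33062), (47.60204, -122.33075)]:
    print(smoother.update(*fix))
```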

Despite the simplicity of the setup, it wasn’t immune to flaws. Seaweed in the prop was a cause of some stressful viewing, not to mention the lack of power required to sail against the wind. After these problems caused the boat to drift off course past a nearby pontoon, public sightings ranged from an illegal police drone to a dog with lights on its head.

If you want to use your autonomous boat for other purposes than scaring the public, we’ve written about vessels that have been used to map the depth of the sea bed, track aircraft, and even cross the Atlantic.

Read more…
3D Robotics

Swarming Solos


A collaboration between Rajant, a mesh networking company, and the Norwegian military. BBC story here:

Scientists from the Norwegian Defence Research Establishment (FFI) and the US's Rajant Corporation are working on simultaneously flying about 20 drones that can work in co-ordination with little human supervision.

A Rajant-patented radio technology called "kinetic mesh" and "foreign function interface" distributed computing software are the technological ingredients behind this breakthrough.


Read more…