Chris Anderson's Posts (2718)

3D Robotics

LIDAR on a chip

From IEEE Spectrum:

Lidar systems measure how far away each pixel in a 3D space is from the emitting device, as well as the direction to that pixel, which allows for the creation of a full 3D model of the world around the sensor. The basic method of operation of a lidar system is to transmit a beam of light, and then measure the returning signal when the light reflects off of an object. The time that the reflected signal takes to come back to the lidar module provides a direct measurement of the distance to the object. Additional information about the object, like its velocity or material composition, can also be determined by measuring certain properties of the reflected signal, such as the induced Doppler shift. Finally, by steering this transmitted light, many different points of an environment can be measured to create a full 3D model.
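The round-trip timing and Doppler relationships described above reduce to two short formulas. Here is a minimal Python sketch of them; the 1550 nm wavelength is an assumption (a common choice for silicon photonics), not a figure from the article.

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_round_trip(t_round_trip_s):
    """Distance to the target: the light covers the path twice, hence the divide by 2."""
    return C * t_round_trip_s / 2.0

def radial_velocity_from_doppler(delta_f_hz, wavelength_m=1550e-9):
    """Closing speed from the Doppler shift of the return; for a reflected
    optical beam the shift is approximately 2*v/wavelength."""
    return delta_f_hz * wavelength_m / 2.0

print(range_from_round_trip(13.3e-9))        # a ~2 m target returns in ~13.3 ns
print(radial_velocity_from_doppler(12.9e6))  # a ~12.9 MHz shift -> ~10 m/s closing speed
```

Steering the beam and repeating this measurement over many directions is what builds up the full 3D model.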

Photo: Evan Ackerman/IEEE Spectrum. Velodyne HDL-64 lidar mounted on a self-driving car. The device uses a laser/receiver module that is mechanically spun around, an approach that limits the scan rate of the system while increasing size, complexity, and cost.

Most lidar systems—like the ones commonly seen on autonomous vehicles—use discrete free-space optical components like lasers, lenses, and external receivers. In order to have a useful field of view, this laser/receiver module is mechanically spun around, often while being oscillated up and down. This mechanical apparatus limits the scan rate of the lidar system while increasing both size and complexity, leading to concerns about long-term reliability, especially in harsh environments. Today, commercially available high-end lidar systems can range from $1,000 to upwards of $70,000, which can limit their applications where cost must be minimized.

Applications such as autonomous vehicles and robotics heavily depend on lidar, and an expensive lidar module is a major obstacle to their use in commercial products. Our work at MIT’s Photonic Microsystems Group is trying to take these large, expensive, mechanical lidar systems and integrate them on a microchip that can be mass produced in commercial CMOS foundries.

Our lidar chips are produced on 300-millimeter wafers, making their potential production cost on the order of $10 each at production volumes of millions of units per year. These on-chip devices promise to be orders of magnitude smaller, lighter, and cheaper than lidar systems available on the market today. They also have the potential to be much more robust because of the lack of moving parts. The non-mechanical beam steering in this device is 1,000 times faster than what is currently achieved in mechanical lidar systems, and potentially allows for an even faster image scan rate. This can be useful for accurately tracking small high-speed objects that are only in the lidar’s field of view for a short amount of time, which could be important for obstacle avoidance for high-speed UAVs.

Photo: Evan Ackerman/IEEE Spectrum. MIT’s prototype lidar chip operating at DARPA’s Pentagon Demo Day in May 2016.

At MIT our lidar on a chip work first began with the development of 300-mm silicon photonics. Silicon photonics is a chip technology that uses silicon waveguides a few hundred nanometers in cross section to create “wires for light,” with properties similar to optical fibers except on a much smaller scale. These waveguides are then integrated into on-chip photonic circuits. An electronic analogy to silicon photonics would be something like taking discrete electrical components, such as copper wires and resistors, and integrating them onto a microchip with copper traces and nano-scale transistors.

Microelectronics, particularly CMOS technology, has allowed for much smaller and more complex electronic circuits that can be mass-produced, and silicon photonics has the potential to do for photonics what microelectronics has done for the electronics industry. Silicon photonics can leverage the technology of commercial CMOS foundries, the same technology that produces the silicon-based microprocessors in computers, in order to be mass-produced at very low cost. Over the past decade, several CMOS foundries have developed dedicated silicon photonics fabrication processes. Through this development, fundamental issues such as waveguide loss and optical isolation were addressed, and the technology is now at a state where complex photonic systems can be created.

Image: Christopher V. Poulton. A scanning electron microscope image of MIT’s solid-state lidar. The device uses thermal phase shifters to heat the waveguides through which the laser propagates, changing the speed and phase of the light that passes through them. Notches fabricated in the silicon act as antennas, scattering the light into free space, and constructive interference is used to focus the beam without a need for lenses.

The Defense Advanced Research Projects Agency (DARPA) is interested in the scalability and integration of silicon photonics with electronics, and created the Electronic-Photonic Heterogeneous Integration (E-PHI) program in 2011. Two major accomplishments of this program were the first large-scale optical phased array and the first array with a wide-angle steerable beam. These devices demonstrated that practical optical phased arrays could be fabricated in commercial CMOS foundries, much like electronic phased arrays have been for decades. Electronic phased arrays have been used in radar applications for non-mechanical radio beam steering, and optical phased arrays seemed like a very elegant solution for a small, low-cost, solid-state lidar.

Our device is a 0.5 mm x 6 mm silicon photonic chip with steerable transmitting and receiving phased arrays and on-chip germanium photodetectors. The laser itself is not part of these particular chips, but our group and others have demonstrated on-chip lasers that can be integrated in the future. In order to steer the laser beam to detect objects across the LIDAR’s entire field of view, the phase of each antenna must be controlled. In this device iteration, thermal phase shifters directly heat the waveguides through which the laser propagates. The index of refraction of silicon depends on its temperature, which changes the speed and phase of the light that passes through it. As the laser passes through the waveguide, it encounters a notch fabricated in the silicon, which acts as an antenna, scattering the light out of the waveguide and into free space. Each antenna has its own emission pattern, and where all of the emission patterns constructively interfere, a focused beam is created without a need for lenses.
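As a rough illustration of the steering math (not the actual device parameters), the phase applied to each antenna of a uniform linear array grows linearly along the array, and the antenna pitch sets where grating lobes appear and hence the usable steering range. The wavelength, pitch and element count below are assumptions made for the sketch.

```python
import numpy as np

WAVELENGTH = 1.55e-6   # assumed telecom-band wavelength, m
PITCH = 2.0e-6         # assumed antenna spacing, m
N_ANTENNAS = 64        # assumed array size

def element_phases(steer_angle_deg):
    """Phase (radians, wrapped to 2*pi) each antenna needs so that all the
    emission patterns interfere constructively at the requested angle."""
    theta = np.radians(steer_angle_deg)
    n = np.arange(N_ANTENNAS)
    return (2 * np.pi * PITCH * n * np.sin(theta) / WAVELENGTH) % (2 * np.pi)

def grating_lobe_angle_deg():
    """Angle of the first grating lobe for a broadside beam; a tighter pitch
    pushes this out and widens the usable steering range."""
    return np.degrees(np.arcsin(WAVELENGTH / PITCH))

print(element_phases(10.0)[:4])   # phases for the first four antennas
print(grating_lobe_angle_deg())   # ~51 deg for these assumed numbers
```

With these assumed numbers the first grating lobe lands near 51°, which illustrates why the antenna spacing sets the steering range and why shrinking it, down to the limit of how small a confining waveguide can be, is the route to wider steering.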

Image: Christopher V. Poulton. Optical micrograph shows MIT’s solid-state lidar, a 0.5 mm x 6 mm silicon photonic chip with steerable transmitting and receiving phased arrays and on-chip germanium photodetectors.

The current steering range of the beam is about 51°, limited by the spacing between the antennas. Reducing this spacing is challenging because there is a limit to how small silicon waveguides can be made while still confining light adequately, although our technology should support steering of nearly 100°. Even with a limited steering range, one could imagine placing multiple lidar sensors around a vehicle in order to get a full 360° image.

Our lidar uses coherent detection rather than a direct time-of-flight measurement, meaning the system responds only to light that was originally transmitted by the device. This reduces the effect of sunlight, which can be a large noise source in lidar systems, and allows for the use of modest photodetectors instead of avalanche photodetectors or photomultiplier tubes, which are challenging and expensive to integrate in a silicon photonics platform.
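To make the coherent-detection point concrete, here is a toy numerical sketch: only a return that is a delayed, attenuated copy of the transmitted light produces a stable beat when mixed with the on-chip reference, while broadband sunlight averages toward zero. The tone frequency, amplitudes and sample count are arbitrary stand-ins; the article does not state the actual modulation scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 100e6, 1 << 18
t = np.arange(n) / fs

reference = np.cos(2 * np.pi * 5e6 * t)            # local copy of the transmitted tone
echo = 0.05 * np.cos(2 * np.pi * 5e6 * t + 0.3)    # weak, phase-shifted return
sunlight = 0.5 * rng.standard_normal(n)            # strong but incoherent background

def coherent_output(received):
    """Idealized mixer: multiply by the reference, keep the low-pass (mean) term."""
    return abs(np.mean(received * reference))

print(coherent_output(echo + sunlight))  # ~0.024, set almost entirely by the echo
print(coherent_output(sunlight))         # ~0.001, the background has no stable beat
```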

At the moment, our on-chip lidar system can detect objects at ranges of up to 2 meters, though we hope to achieve a 10-meter range within a year. The minimum range is around 5 centimeters. We have demonstrated centimeter longitudinal resolution and expect 3-cm lateral resolution at 2 meters. There is a clear development path towards lidar on a chip technology that can reach 100 meters, with the possibility of going even farther.

Using other materials in the chip (such as silicon nitride) should allow the output power to be increased by two to three orders of magnitude. Our fabrication process includes silicon nitride layers alongside the silicon, which allows for systems that use both. Additionally, a larger phased array would produce less diffraction (spreading out) of the beam, resulting in longer range and higher lateral resolution. The challenge is how uniformly and precisely the silicon waveguides and antennas can be fabricated, and this capability will most likely improve as lithography technologies advance. Though we have had promising results in creating very-large-scale phased arrays for lidar applications, how large they can reliably be fabricated is still an open question, and will most likely be the limiting factor for the range of this technology in the future.

DARPA has recently created a follow-up program called Modular Optical Aperture Building Blocks (MOABB), which is focused on extending this silicon photonic lidar work in the coming years. Though the MOABB program is not a part of our academic research group, after the lidar effort of the E-PHI program ends we plan to extend our phased-array work to free-space communications, allowing multiple photonic chips to interface with each other at data rates above 40 Gb/s. We are also developing visible-light phased arrays for applications such as Li-Fi and holography that can be seen by the human eye.

We believe that commercial lidar-on-a-chip solutions will be available in a few years. A low-cost, low-profile lidar system such as this has many applications in autonomous vehicles and robotics. It would allow for multiple inexpensive lidar modules to be placed around a car or robot. These on-chip lidar systems could even be placed in the fingers of a robot to see what it is grasping because of their high resolution, small form factor, and low cost. These developments have the potential to dramatically alter the landscape of lidar systems by changing how the devices operate and opening up the technology to numerous new applications, some of which have not even been thought of today. 

Christopher V. Poulton is a PhD student investigating optical phased arrays and their applications in lidar, free-space communication, and more at MIT’s Photonic Microsystems Group led by Prof. Michael Watts. He is currently the lead photonic researcher on the phased array effort of the DARPA E-PHI program and a DARPA Riser.

Professor Michael R. Watts is a principal investigator at the Research Laboratory of Electronics and a member of the electrical engineering and computer science department at MIT. He is currently the principal investigator on the MIT DARPA E-PHI and DODOS programs and was recently named the CTO of AIM Photonics.

3D Robotics

QGroundControl 3.0 released


From the APM discussion group:

We are happy to announce the release of QGroundControl 3.0. QGroundControl provides full flight control, mission planning and configuration for ArduPilot or PX4 Pro powered vehicles. It is available for Windows, Linux, OS X, and Android devices (tablet and phone). Also with the release of QGC 3.0 comes beta availability of iOS support. The goal for QGroundControl is improved ease of use for new users as well as high-end feature support for experienced users. Download links and the user guide can be found here: http://qgroundcontrol.com

Screenshots from Android version:

3D Robotics

Cool report this morning from NBC's Today Show: Microsoft's Project Premonition is using 3DR Solos to find mosquito breeding areas, then putting "smart traps" there. Those traps (shown below) can analyze the DNA of the mosquitoes they catch to see if they're carrying the Zika virus. This is already deployed in Houston, Texas, where the first Zika transmissions in the US have been reported.


Watch the whole segment here

3D Robotics

Swarming with Solos and ROS

I've posted a bunch of teaser videos of the Solo swarming we do often at 3DR, but have not yet posted instructions on how to do it yourself -- I was just waiting for the team to document the steps. Well good news: now they have!

The above video (sorry for the shaky camera -- we were excited) shows a fun exercise in using Solo drones to autonomously play Pong with themselves. Each "paddle" is made up of two Solos; the "ball" is a fifth. All of this is automatically controlled by ROS, running on the ground. Instructions on how to do this are in the documents below.

Instructions:

Enjoy!

3D Robotics

I've backed it. It's a similar concept to what we're doing at go-kart scale, but without the GPS and other big-car stuff. On Kickstarter now.

Formula Pi is an exciting new race series and club designed to get people started with self-driving robotics. The aim is to give people with little hardware or software experience a platform to get started and learn how autonomous vehicles work.

We provide the hardware and basic software to join the race series. Competitors can modify the software however they like, and come race day a prepared SD card with your software on it will be placed into our club robots and the race will begin!

You don't need to be in the UK, you can enter the series from anywhere in the world. Just join and send code to compete!

Formula Pi YetiBorgs on track

The Series

The series will consist of races hosted here at PiBorg. The races will be live broadcast for everyone to watch online. Races will consist of up to 23 laps (approximately 1/2km) with 5 competitors per race. The robots will be provided by PiBorg for racing.

Grid position, competitor list and club robot assignments are assigned with a random shuffle script.

Each competitor will be involved in a minimum of 5 races. We will have a complete structure once we know how many competitors are involved.

Anticipated race dates

Although these dates may change (especially depending on the number of entries), we expect the following:

Winter series - from October 2016 - January 2017

Summer series - from April 2017 - July 2017

Formula Pi YetiBorgs ready to start

The Track

The track was designed and built at the PiBorg offices. The 22.9m track has 5 corners and can be run in either direction. There is a built in timing rig and start lights to control the race. The colouring of the track helps the software identify changes in direction.

Formula Pi track #1 - PiBorg

The Robots

The YetiBorg HS Zero robots are a new design and will use the ZeroBorg motor controller from our already successful ZeroBorg Kickstarter. The 2mm-thick aluminium chassis are made here at PiBorg.

YetiBorg 2mm aluminium chassis

When entering the races you will have a YetiBorg top (called a YetiLid). This is a fibreglass lid with a microSD card slot in it to conveniently store your code when shipping your top. The YetiLids will need to display a unique number so we can identify you, and you can decorate your top to make sure you can spot your robot.

YetiLid - top for your YetiBorg

The Software

We have written basic control software which will be available for everyone to use. This will be available on the Formula Pi website and on GitHub. The software can be modified by the competitors to improve performance, or you can provide your own software if you like! It can be as simple or complex as you want.

The winner's code is published at the end of the season, so everyone can start the next season from an equal standing. This helps the autonomous software improve and makes for a highly competitive sport.

History of Formula Pi

It took a lot of testing of ideas and lots of different code before we got good results. Our very first run was less than successful! We eventually decided that if the track was painted in bright colours, the robots could dedicate more processing time to things such as the position of other robots, collision avoidance, traffic-light detection and so on.
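As a flavour of what the brightly coloured track makes possible, here is a minimal sketch of turning a camera frame into a steering value by thresholding one lane colour and tracking its centroid. It is purely illustrative (hypothetical HSV range, OpenCV-based), not the actual Formula Pi code.

```python
import cv2
import numpy as np

# Hypothetical HSV range for one of the track's lane colours.
LANE_LOWER = np.array([40, 80, 80])
LANE_UPPER = np.array([80, 255, 255])

def steering_from_frame(frame_bgr):
    """Return a steering value in [-1, 1]: negative steers left, positive right,
    based on where the lane colour's centroid sits in the image."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LANE_LOWER, LANE_UPPER)
    moments = cv2.moments(mask)
    if moments["m00"] == 0:        # lane colour not visible: hold course
        return 0.0
    cx = moments["m10"] / moments["m00"]
    half_width = frame_bgr.shape[1] / 2.0
    return (cx - half_width) / half_width
```

Because this kind of colour segmentation is so cheap, the Pi has cycles left over for the harder jobs mentioned above, such as spotting other robots.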


Project status

We are ready to go racing, with the exception of having a fleet of performance-calibrated YetiBorgs for the club robots! This Kickstarter would allow us to buy resources to build robots for competitors to use for racing. The more successful we are, the more competitors we can host.

The track has been mostly built, but is missing a timing rig. We have a laser timing rig that we developed for Pi Wars and we will be adapting this with a Pi 3 and Pi Camera to take images of the cars as they pass the start/finish line. We can also use this to check for jump starts.

We intend to buy basic cameras to allow us to broadcast the races on a platform such as YouTube. This way we can capture all of the action on track and record the races for you to enjoy at any time.

The software is mostly ready; we still need to do a few tweaks here and there. We are working on blog material at the same time as coding, so you can see what we have done and why.

We have a domain and website dedicated to this project at www.formulapi.com where you will find rules, code and all sorts of useful information.

Formulapi.com blog

The Rewards

The rewards vary from race entry with a YetiLid to the YetiBorgs themselves. The YetiBorgs are high-quality robots which can take the bumps and rubbing of racing. You don't need to have one to be part of the racing, but you might want one to experiment with autonomous robotics or just to drive like an RC car! They come with everything you need with the exception of a 9V battery and a Raspberry Pi Zero.

YetiBorg Raspberry Pi Autonomous robot

They are made from 2mm-thick aluminium and they have high-quality metal-gear motors similar to a DiddyBorg motor. They also sport the ZeroBorg motor controller from our last Kickstarter.

YetiBorg high quality motor

The YetiBorgs are supplied with detailed build instructions and our normal high standards of PiBorg support. As always, if there are any problems with building or getting started, we're there to help on our PiBorg forum

YetiBorg build instructions

This together makes an awesome Raspberry Pi Zero controlled robot!

3D Robotics

First Pixhawk-based drone company to IPO


AgEagle, which makes a Pixhawk-powered plane for agriculture inspection (see above; full video below), is going public on NASDAQ. From sUASNews:

AgEagle Aerial Systems is set to list on the NASDAQ stock exchange this week Thursday, 21 July. The Neodesha, Kansas-based company is planning to raise around $15 million in its Initial Public Offering (IPO) by offering 2.7 million shares at between $5 and $6 per share and will list under the stock symbol NASDAQ:UAVS.

In 2015, according to the company’s financials filed with the US Securities and Exchange Commission (SEC), AgEagle showed an annual net loss of $1.4 million on revenues of $774,000 after showing a profit of $200,000 on higher revenues of $885,000 the year before.

AgEagle designs and manufactures data-acquisition drones for precision agriculture and according to AgEagle’s website,

“AgEagle was a cooperative effort between founder Bret Chilcott and Kansas State University.  In 2011, K-State was working to merge small radio controlled airplanes and near infrared photo image technology to determine crop health.  Bret and the professors soon realized that farmers would benefit from this effort and AgEagle was created to serve that market.

By 2012, Bret completed the first prototype of AgEagle and took the prototype throughout the corn belt meeting with farmers and agronomists to see if they thought the AgEagle could help them. Interest was extremely high and by the fall of 2013, the first AgEagle drone was sold. Since that time, the AgEagle company has continually grown with dealers now across the United States.  Scores of AgEagles have been sold in the US, Australia, Canada and Brazil.”

3D Robotics

Full disclosure: I was one of the judges. Too many great entries -- it was hard to pick a winner. From Popular Science:

Zelator, by Alexey Medvedev of Omsk, Russia, won first place in the Airbus Main Prize category.

The contest, which got underway this spring, ultimately netted 425 submissions.

The drones had a lengthy list of requirements, including a weight below 55 pounds, the ability to take off and land vertically, and a pusher propeller.

There were nine winning designs (three places in each of three categories), and in total they were awarded over $100,000.

The Zelator entry can be viewed here. It features a sleek cargo compartment, a powerful engine for forward thrust in flight, and four smaller rotors to provide vertical lift.

The SkyPac drone, designed by Finn Yonkers of North Kingstown, Rhode Island, won the cargo category. SkyPac features a versatile tubular body that, as designed, can fit many different loads for many missions, including dropping life preservers for sea rescues. Finally, Frédéric Le Sciellour of Pont De L’Arn, France, won the community category with his slick Thunderbird design, a very horizontal craft with a hidden storage compartment in the main body.

3D Robotics

Here's a primer on custom geofencing with Solo, which has never been easier. It has just been released in the new 3DR Solo app for iOS and will be coming to Android soon: (Correction: all the other features in the latest Solo iOS app, such as rewind, are coming to Android in the next month or so, but we do not have an Android ETA for geofencing yet. Sorry for the confusion on that -- I read the internal roadmap too fast and missed that)

On your map view, you can enable or disable geofencing. When it’s on, you will have four points (like dropped pins) that you use to make a virtual quadrilateral flight cage around your drone. Solo uses GPS to set and obey this boundary. You can move your four points to change the shape, size and location of your quadrilateral at any time in flight, blocking off any objects or areas you choose. This also means that if you want to fly in another area, simply move your pins on your screen and you’ll create a new safe zone — or you can turn the geofence off altogether.
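Under the hood, enforcing a four-pin fence comes down to a point-in-polygon test against the drone's GPS position. Below is a minimal sketch of the standard ray-casting test; the coordinates are made-up examples and this is an illustration of the idea, not 3DR's implementation.

```python
def inside_fence(position, fence):
    """Ray-casting point-in-polygon test. position is (lat, lon); fence is a
    list of (lat, lon) pins in order around the boundary (four for Solo)."""
    lat, lon = position
    inside = False
    j = len(fence) - 1
    for i in range(len(fence)):
        lat_i, lon_i = fence[i]
        lat_j, lon_j = fence[j]
        crosses = (lat_i > lat) != (lat_j > lat)
        if crosses and lon < (lon_j - lon_i) * (lat - lat_i) / (lat_j - lat_i) + lon_i:
            inside = not inside
        j = i
    return inside

fence = [(37.873, -122.303), (37.873, -122.301),
         (37.871, -122.301), (37.871, -122.303)]
print(inside_fence((37.872, -122.302), fence))  # True: inside the box
print(inside_fence((37.870, -122.302), fence))  # False: outside, so stop or brake
```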

Why this is important:

  • It works.
  • Safe: Because it’s a hard fence, it really does keep your drone away from objects, even power lines.
  • Customizable: If you don’t want that hard fence there, you can move it to where you want it, or easily disable or enable it.
  • Engaging: Perhaps the most important advantage. Instead of stopping you from thinking about the environment around you, like hardware sensors would do, geofencing keeps pilots engaged and aware. This level of interaction with your drone and with your environment is critical for any truly holistic approach to drone safety, as opposed to blindly trusting an imperfect technology.
3D Robotics


It's customary and traditional that we celebrate the addition of every 1,000 new members here and share the traffic stats. This time it's 79,000!

Despite the site being down on and off for nearly a week over the past two months, as you can see on the right side of the traffic graph above (the fault of Ning, our hosting provider -- you can see the sorry story of their tech issues here), we still had more than a million page views this month.

We've known we were eventually going to have to shift off Ning, but it looks like the time is now. I've been evaluating Jamroom, which has a special service to migrate Ning networks, and so far it looks good. Does anybody have experience with Jamroom and want to help with the migration?

3D Robotics

Smart RTL comes to PX4

When it's time for a drone to fly itself home, for years we settled for a straight line back, "as the crow flies." That's obviously inadequate, since there could easily be a tree or large building in the way. So 3DR improved that with Solo's "Rewind" feature, which finds the shortest path back through the known-good area that the drone has already flown through.

Now that same technology is coming to the open source PX4 flight code. A preview is shown above. Smart.
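The idea is easy to sketch: record breadcrumbs as you fly out, then return along a simplified version of that same path. The snippet below is a toy illustration of that breadcrumb approach (local x/y coordinates in metres, made-up thresholds), not the actual Solo or PX4 implementation.

```python
import math

def record(breadcrumbs, position, min_spacing=3.0):
    """Push a breadcrumb whenever we've moved far enough from the last one."""
    if not breadcrumbs or math.dist(breadcrumbs[-1], position) >= min_spacing:
        breadcrumbs.append(position)

def simplify(path, tolerance=2.0):
    """Drop breadcrumbs that sit within `tolerance` metres of the straight line
    joining their neighbours, shortening the return without leaving the
    corridor the vehicle has already flown through."""
    if len(path) < 3:
        return list(path)
    kept = [path[0]]
    for prev, cur, nxt in zip(path, path[1:], path[2:]):
        (x1, y1), (x2, y2), (x3, y3) = prev, cur, nxt
        segment = math.dist(prev, nxt)
        twice_area = abs((x3 - x1) * (y1 - y2) - (x1 - x2) * (y3 - y1))
        if segment == 0 or twice_area / segment > tolerance:
            kept.append(cur)
    kept.append(path[-1])
    return kept

def return_path(breadcrumbs):
    """Waypoints to fly home: the simplified outbound track, reversed."""
    return list(reversed(simplify(breadcrumbs)))
```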

3D Robotics

BAE wants to grow drones in vats

From BAE, the ultimate DIY Drone:

Ahead of this year's Farnborough International Airshow, engineers and scientists at BAE Systems and the University of Glasgow have outlined their current thinking about military aircraft and how they might be designed and manufactured in the future.

The concepts have been developed collaboratively as part of BAE Systems' 'open innovation' approach to sharing technology and scientific ideas, which sees large and established companies working with academia and small technology start-ups.

During this century, the scientists and engineers envisage that small Unmanned Air Vehicles (UAVs) bespoke to specific military operations could be 'grown' in large-scale labs through chemistry, speeding up evolutionary processes and creating bespoke aircraft in weeks, rather than years.
 
A radical new machine called a Chemputer™ could enable advanced chemical processes to grow aircraft and some of their complex electronic systems, conceivably from a molecular level upwards. This unique UK technology could use environmentally sustainable materials and support military operations where a multitude of small UAVs with a combination of technologies serving a specific purpose might be needed quickly. It could also be used to produce multi-functional parts for large manned aircraft.
 
 
Flying at such speeds and high altitude would allow them to outpace adversary missiles. The aircraft could perform a variety of missions where a rapid response is needed. These include deploying emergency supplies for Special Forces inside enemy territory using a sophisticated release system and deploying small surveillance aircraft.
 
“The world of military and civil aircraft is constantly evolving and it's been exciting to work with scientists and engineers outside BAE Systems and to consider how some unique British technologies could tackle the military threats of the future” said Professor Nick Colosimo, a BAE Systems Global Engineering Fellow.
 
Regius Professor Lee Cronin at the University of Glasgow, and Founding Scientific Director at Cronin Group PLC – who is developing the Chemputer™ – added: ‘This is a very exciting time in the development of chemistry. We have been developing routes to digitize synthetic and materials chemistry and at some point in the future hope to assemble complex objects in a machine from the bottom up, or with minimal human assistance. Creating small aircraft would be very challenging but I’m confident that creative thinking and convergent digital technologies will eventually lead to the digital programming of complex chemical and material systems.’
 
3D Robotics


Native support for the uAvionix pingRX ADS-B receiver is now available in the beta of the 3.4 code for Copter, Plane and Helicopter. Below is the procedure for loading the firmware on your Pixhawk/APM autopilot to visualize ADS-B-compliant aircraft in your operational vicinity.

  • Verify that you are running the latest version of Mission Planner
  • Open Mission Planner and click “Install Firmware” from the “Initial Setup” window
  • Use the Hot Key command “Ctrl Q” to access Beta/Trunk firmware (V3.4dev) and click load
  • Once the firmware update is complete, power cycle the autopilot
  • Verify a successful update by viewing “Flight Data–Message” window located in the bottom left corner of Mission Planner
  • Set ADSB_ENABLE to “1” in the Mission Planner parameters list
  • The default connection on the Pixhawk autopilot for the pingRX remains TELEM2
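If you prefer to script the parameter change rather than use the Mission Planner list, the same ADSB_ENABLE parameter can be set over MAVLink. The sketch below uses pymavlink; the UDP connection string is an example and should be replaced with your actual telemetry link.

```python
from pymavlink import mavutil

# Example link only -- substitute your serial port or telemetry radio address.
master = mavutil.mavlink_connection("udp:127.0.0.1:14550")
master.wait_heartbeat()

# Enable the ADS-B feature (same effect as setting it in the parameter list above).
master.mav.param_set_send(
    master.target_system,
    master.target_component,
    b"ADSB_ENABLE",
    1.0,
    mavutil.mavlink.MAV_PARAM_TYPE_INT8,
)

# The autopilot acknowledges by echoing the new value in a PARAM_VALUE message.
print(master.recv_match(type="PARAM_VALUE", blocking=True, timeout=5))
```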
3D Robotics


Just for the next day -- get it while you can!

 

Comes with the latest 2.4 software with such features as Smart Rewind RTL, Panos, and Ziplines (on iOS now, and Android catching up to latest features next month)


The version without gimbal is a great and cheap open source development platform. We use dozens of them for swarming (instructions coming soon) 

3D Robotics


From the University of Cincinnati:

Artificial intelligence (AI) developed by a University of Cincinnati doctoral graduate was recently assessed by subject-matter expert and retired United States Air Force Colonel Gene Lee — who holds extensive aerial combat experience as an instructor and Air Battle Manager with considerable fighter aircraft expertise — in a high-fidelity air combat simulator.

The artificial intelligence, dubbed ALPHA, was the victor in that simulated scenario, and according to Lee, is “the most aggressive, responsive, dynamic and credible AI I’ve seen to date.”

Details on ALPHA – a significant breakthrough in the application of what’s called genetic-fuzzy systems – are published in the most recent issue of the Journal of Defense Management, as this application is specifically designed for use with Unmanned Combat Aerial Vehicles (UCAVs) in simulated air-combat missions for research purposes.

The tools used to create ALPHA as well as the ALPHA project have been developed by Psibernetix, Inc., recently founded by UC College of Engineering and Applied Science 2015 doctoral graduate Nick Ernest, now president and CEO of the firm; as well as David Carroll, programming lead, Psibernetix, Inc.; with supporting technologies and research from Gene Lee; Kelly Cohen, UC aerospace professor; Tim Arnett, UC aerospace doctoral student; and Air Force Research Laboratory sponsors.

Gene Lee in flight simulator
Retired United States Air Force Colonel Gene Lee, in a flight simulator, takes part in simulated air combat versus artificial intelligence technology developed by a team comprised of industry, U.S. Air Force and University of Cincinnati representatives.*

High pressure and fast pace: An artificial intelligence sparring partner

ALPHA is currently viewed as a research tool for manned and unmanned teaming in a simulation environment. In its earliest iterations, ALPHA consistently outperformed a baseline computer program previously used by the Air Force Research Lab for research.  In other words, it defeated other AI opponents.

In fact, it was only after early iterations of ALPHA bested other computer program opponents that Lee then took to manual controls against a more mature version of ALPHA last October. Not only was Lee not able to score a kill against ALPHA after repeated attempts, he was shot out of the air every time during protracted engagements in the simulator.

Since that first human vs. ALPHA encounter in the simulator, this AI has repeatedly bested other experts as well, and is even able to win out against these human experts when its (the ALPHA-controlled) aircraft are deliberately handicapped in terms of speed, turning, missile capability and sensors.

Lee, who has been flying in simulators against AI opponents since the early 1980s, said of that first encounter against ALPHA, “I was surprised at how aware and reactive it was. It seemed to be aware of my intentions and reacting instantly to my changes in flight and my missile deployment. It knew how to defeat the shot I was taking. It moved instantly between defensive and offensive actions as needed.”

He added that with most AIs, “an experienced pilot can beat up on it (the AI) if you know what you’re doing. Sure, you might have gotten shot down once in a while by an AI program when you, as a pilot, were trying something new, but, until now, an AI opponent simply could not keep up with anything like the real pressure and pace of combat-like scenarios.”

But, now, it’s been Lee, who has trained with thousands of U.S. Air Force pilots, flown in several fighter aircraft and graduated from the U.S. Fighter Weapons School (the equivalent of earning an advanced degree in air combat tactics and strategy), as well as other pilots who have been feeling pressured by ALPHA.

And, anymore, when Lee flies against ALPHA in hours-long sessions that mimic real missions, “I go home feeling washed out. I’m tired, drained and mentally exhausted. This may be artificial intelligence, but it represents a real challenge.”

 

Nick Ernest, David Carroll and Gene Lee in flight simulator.

From left, UC graduate and Psibernetix President and CEO Nick Ernest; David Carroll of Psibernetix; and retired U.S. Air Force Colonel Gene Lee in an air-combat simulator.*

An artificial intelligence wingman: How an AI combat role might develop

Explained Ernest, “ALPHA is already a deadly opponent to face in these simulated environments. The goal is to continue developing ALPHA, to push and extend its capabilities, and perform additional testing against other trained pilots. Fidelity also needs to be increased, which will come in the form of even more realistic aerodynamic and sensor models. ALPHA is fully able to accommodate these additions, and we at Psibernetix look forward to continuing development."

In the long term, teaming artificial intelligence with U.S. air capabilities will represent a revolutionary leap. Air combat as it is performed today by human pilots is a highly dynamic application of aerospace physics, skill, art, and intuition to maneuver a fighter aircraft and missiles against adversaries, all moving at very high speeds. After all, today’s fighters close in on each other at speeds in excess of 1,500 miles per hour while flying at altitudes above 40,000 feet. Microseconds matter, and the cost for a mistake is very high.

Eventually, ALPHA aims to lessen the likelihood of mistakes since its operations already occur significantly faster than do those of other language-based consumer product programming. In fact, ALPHA can take in the entirety of sensor data, organize it, create a complete mapping of a combat scenario and make or change combat decisions for a flight of four fighter aircraft in less than a millisecond. Basically, the AI is so fast that it could consider and coordinate the best tactical plan and precise responses, within a dynamic environment, over 250 times faster than ALPHA’s human opponents could blink.

So it’s likely that future air combat, requiring reaction times that surpass human capabilities, will integrate AI wingmen – Unmanned Combat Aerial Vehicles (UCAVs) – capable of performing air combat and teamed with manned aircraft wherein an onboard battle management system would be able to process situational awareness, determine reactions, select tactics, manage weapons use and more. So, AI like ALPHA could simultaneously evade dozens of hostile missiles, take accurate shots at multiple targets, coordinate actions of squad mates, and record and learn from observations of enemy tactics and capabilities.

UC’s Cohen added, “ALPHA would be an extremely easy AI to cooperate with and have as a teammate. ALPHA could continuously determine the optimal ways to perform tasks commanded by its manned wingman, as well as provide tactical and situational advice to the rest of its flight.”

 

 

A programming victory: Low computing power, high-performance results

It would normally be expected that an artificial intelligence with the learning and performance capabilities of ALPHA, applicable to incredibly complex problems, would require a super computer in order to operate.

However, ALPHA and its algorithms require no more than the computing power available in a low-budget PC in order to run in real time and quickly react and respond to uncertainty and random events or scenarios.

According to a lead engineer for autonomy at AFRL, "ALPHA shows incredible potential, with a combination of high performance and low computational cost that is a critical enabling capability for complex coordinated operations by teams of unmanned aircraft."

Ernest began working with UC engineering faculty member Cohen to resolve that computing-power challenge about three years ago while a doctoral student. (Ernest also earned his UC undergraduate degree in aerospace engineering and engineering mechanics in 2011 and his UC master’s, also in aerospace engineering and engineering mechanics, in 2012.)

They tackled the problem using language-based control (vs. numeric based) and using what’s called a “Genetic Fuzzy Tree” (GFT) system, a subtype of what’s known as fuzzy logic algorithms.

States UC’s Cohen, “Genetic fuzzy systems have been shown to have high performance, and a problem with four or five inputs can be solved handily. However, boost that to a hundred inputs, and no computing system on planet Earth could currently solve the processing challenge involved – unless that challenge and all those inputs are broken down into a cascade of sub decisions.”

That’s where the Genetic Fuzzy Tree system and Cohen and Ernest’s years’ worth of work come in.

According to Ernest, “The easiest way I can describe the Genetic Fuzzy Tree system is that it’s more like how humans approach problems.  Take for example a football receiver evaluating how to adjust what he does based upon the cornerback covering him. The receiver doesn’t think to himself: ‘During this season, this cornerback covering me has had three interceptions, 12 average return yards after interceptions, two forced fumbles, a 4.35 second 40-yard dash, 73 tackles, 14 assisted tackles, only one pass interference, and five passes defended, is 28 years old, and it's currently 12 minutes into the third quarter, and he has seen exactly 8 minutes and 25.3 seconds of playtime.’”

That receiver – rather than standing still on the line of scrimmage before the play trying to remember all of the different specific statistics and what they mean individually and combined to how he should change his performance – would just consider the cornerback as ‘really good.’

The cornerback's historic capability wouldn’t be the only variable. Specifically, his relative height and relative speed should likely be considered as well. So, the receiver’s control decision might be as fast and simple as: ‘This cornerback is really good, a lot taller than me, but I am faster.’

At the very basic level, that’s the concept involved in terms of the distributed computing power that’s the foundation of a Genetic Fuzzy Tree system wherein, otherwise, scenarios/decision making would require too high a number of rules if done by a single controller.

Added Ernest, “Only considering the relevant variables for each sub-decision is key for us to complete complex tasks as humans. So, it makes sense to have the AI do the same thing.”

In this case, the programming involved breaking up the complex challenges and problems represented in aerial fighter deployment into many sub-decisions, thereby significantly reducing the required “space” or burden for good solutions. The branches or subdivisions of this decision-making tree consist of high-level tactics, firing, evasion and defensiveness.

That’s the “tree” part of the term “Genetic Fuzzy Tree” system.
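To make the "fuzzy" half of that tree concrete, here is a toy leaf decision in the spirit of the receiver example, with hand-written membership functions and a single rule. It is purely illustrative and has nothing to do with Psibernetix's actual code or rule base.

```python
def clamp01(x):
    return min(max(x, 0.0), 1.0)

# Fuzzy memberships for the linguistic terms in the receiver example.
def cornerback_is_good(skill_0_to_1):
    return clamp01((skill_0_to_1 - 0.5) / 0.5)

def cornerback_is_taller(height_advantage_inches):
    return clamp01(height_advantage_inches / 6.0)

def i_am_faster(speed_advantage_seconds_40yd):
    return clamp01(speed_advantage_seconds_40yd / 0.3)

def deep_route_confidence(skill, height_adv_in, speed_adv_s):
    """One leaf of a decision tree: run deep when I'm faster AND the defender
    isn't both really good and a lot taller (min acts as the fuzzy AND)."""
    threat = min(cornerback_is_good(skill), cornerback_is_taller(height_adv_in))
    return min(i_am_faster(speed_adv_s), 1.0 - threat)

print(deep_route_confidence(0.9, 4.0, 0.2))  # good, taller defender, but I'm quicker
```

The point of the cascade is that each leaf only sees the handful of inputs that matter to it; a single flat rule base over a hundred inputs would explode combinatorially, which is the problem Cohen describes above.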

Nick Ernest, David Carroll and Gene Lee in a flight simulator.

Standing at left is UC grad and Psibernetix President and CEO Nick Ernest. David Carroll, also of Psibernetix, is standing at right. Seated at the simulator controls is retired U.S. Air Force Colonel Gene Lee.*

Programming that's language based, genetic and generational

Most AI programming uses numeric-based control and provides very precise parameters for operations. In other words, there’s not a lot of leeway for any improvement or contextual decision making on the part of the programming.

The AI algorithms that Ernest and his team ultimately developed are language based, with if/then scenarios and rules able to encompass hundreds to thousands of variables. This language-based control or fuzzy logic, while much less about complex mathematics, can be verified and validated.

Another benefit of this linguistic control is the ease in which expert knowledge can be imparted to the system. For instance, Lee worked with Psibernetix to provide tactical and maneuverability advice which was directly plugged in to ALPHA. (That “plugging in” occurs via inputs into a fuzzy logic controller. Those inputs consist of defined terms, e.g., close vs. far in distance to a target; if/then rules related to the terms; and inputs of other rules or specifications.)

Finally, the ALPHA programming is generational. It can be improved from one generation to the next, from one version to the next. In fact, the current version of ALPHA is only that – the current version. Subsequent versions are expected to perform significantly better.

Again, from UC’s Cohen, “In a lot of ways, it’s no different than when air combat began in W.W. I. At first, there were a whole bunch of pilots. Those who survived to the end of the war were the aces. Only in this case, we’re talking about code.”

To reach its current performance level, ALPHA’s training has occurred on a $500 consumer-grade PC. This training process started with numerous and random versions of ALPHA. These automatically generated versions of ALPHA proved themselves against a manually tuned version of ALPHA. The successful strings of code are then “bred” with each other, favoring the stronger, or highest performance versions. In other words, only the best-performing code is used in subsequent generations. Eventually, one version of ALPHA rises to the top in terms of performance, and that’s the one that is utilized.

This is the “genetic” part of the “Genetic Fuzzy Tree” system.
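The "genetic" half can be sketched just as compactly: candidate rule sets are scored, the strongest are kept, and children are bred from them by crossover with occasional mutation. The encoding and fitness function below are placeholders for illustration, not ALPHA's.

```python
import random

def crossover(parent_a, parent_b):
    """Splice two 'strings of code' (here, lists of rule parameters)."""
    cut = random.randrange(1, len(parent_a))
    return parent_a[:cut] + parent_b[cut:]

def evolve(population, fitness, generations=50, mutation_rate=0.05):
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: len(ranked) // 2]            # favour the strongest
        children = [crossover(random.choice(parents), random.choice(parents))
                    for _ in range(len(population) - len(parents))]
        for child in children:                          # occasional mutation
            if random.random() < mutation_rate:
                child[random.randrange(len(child))] = random.random()
        population = parents + children
    return max(population, key=fitness)

# Toy fitness: prefer parameter sets whose values sum close to 6.0.
population = [[random.random() for _ in range(8)] for _ in range(40)]
best = evolve(population, fitness=lambda params: -abs(sum(params) - 6.0))
print(round(sum(best), 2))   # moves toward 6.0 over the generations
```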

Said Cohen, “All of these aspects are combined, the tree cascade, the language-based programming and the generations. In terms of emulating human reasoning, I feel this is to unmanned aerial vehicles what the IBM/Deep Blue vs. Kasparov was to chess.”

Funding and support

ALPHA is developed by Psibernetix Inc., serving as a contractor to the United States Air Force Research Laboratory.

Support for Ernest’s doctoral research, $200,000 in total, was provided over three years by the Dayton Area Graduate Studies Institute and the U.S. Air Force Research Laboratory.

3D Robotics

All you ArduRover fans out there, this is your chance to once again show your skills at AVC. It's September 17 this year. Here's the new Sparkfun announcement post:

This year, SparkFun’s Autonomous Vehicle Competition (AVC) will incorporate a few new twists. Along with our classic autonomous race course and Combat Bots, this year will feature the Power Racing Series (PRS) and a new autonomous PRS category. I’m here today to talk about the classic AVC race track.

To accommodate all of the attractions, we’ve split up our parking lot into smaller sections. So while the classic AVC course will be smaller than in previous years, it will almost certainly be more challenging. 

The track will be 10 feet wide, with hay bales along the sides. These are just hay; they’re not covered with anything. To start the race, each entrant gets 300 points, and one point will be deducted for every second that you’re navigating the course. Those deductions stop as soon as your vehicle crosses the finish line, and you can earn more points by tackling some of the obstacles along the way.

From the starting line, your vehicle will navigate a very nice and easy, 120-foot straightaway to the first right turn, followed by another 35-foot straightaway to the second right turn. Following that turn, you’ll encounter a 58-foot section with four red barrel obstacles. You can dodge them or hit them (they may or may not be easily movable), it’s up to you, but you don’t get any extra points for navigating the barrels.


But that’s a long way around there, isn’t it? That’s gonna eat some time. So maybe you want to take the optional dirt section, huh? About 30 feet from the start line, there’s a right turn onto a 7-foot-wide section of track that’s going to be covered with dirt, maybe some rocks, skulls, etc. Definitely off-road in nature. Taking this section will shave off some time if your ‘bot can hang, as it will lead you to the end of the barrel section, avoiding them entirely. It will also land you 50 extra points.

Regardless of which of those two paths you choose, your ‘bot now sits at a four-way intersection. From the barrel-straight, the easy path is to your left (or straight from the dirt section) to another 58-foot straightaway. There will be a green hoop placed in this section, and going through the hoop will net you another 10 points. At the end of that section is a right turn onto a 67-foot straightaway with no other obstacles, followed by another right turn and another 58-foot straightaway. On this section, there will be a ramp (more of a jump) that will net you 10 points if you get over it.


But again, that’s a long way around and it’s going to eat your time. So if you want to save some time, instead of taking the left turn from the barrel-straight, you can go straight (or a right turn from the dirt section). This will lead down a straight that ends with the Discombobulator.

If you don’t remember this from last year, it’s a giant gas-powered turntable that’s specifically designed to lay waste to your navigation algorithms. Taking this path will relieve you from taking the three other sections, but it can send your ‘bot flying. And if you choose to jump the Discombobulator, beware: If you jump too far, you can end up in the “Ball Pit of Despair.” This is essentially a low-edge kiddie pool filled with those big plastic balls you see at fast food chain play areas – the ones that always smell sorta funny (hey, we were going to use acetone to begin with). Landing in the Ball Pit of Despair will end your run. If you make it past the Discombobulator, you’ll get 50 more points. But just to show you that we’re nice guys, we’ll give you 10 points just for getting up the Discombobulator ramp. Who loves ya? We do.


Assuming you successfully navigate the Discombobulator, hang a right turn (or just straight from the easy path) into the last “hard” section of track. You’ll first take a right turn, then a hairpin to the left, followed by another hairpin to the right. That leads you to the final, 25-foot path to the finish. Yay! You did it!
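For anyone tallying strategy options, the scoring described above boils down to a simple sum. Here's a quick helper using the point values quoted in this post; treat it as my reading of the rules, not an official SparkFun scorer.

```python
# Bonus values as quoted above.
BONUSES = {
    "dirt_section": 50,           # taking the off-road shortcut
    "hoop": 10,                   # driving through the green hoop
    "ramp": 10,                   # clearing the jump on the easy path
    "discombobulator_ramp": 10,   # just getting up the Discombobulator ramp
    "discombobulator_passed": 50, # making it past the Discombobulator
}

def avc_score(seconds_to_finish, bonuses_earned=()):
    """300 points to start, minus one per second until the finish line,
    plus whatever obstacle bonuses were earned along the way."""
    return 300 - int(seconds_to_finish) + sum(BONUSES[b] for b in bonuses_earned)

print(avc_score(95, ("dirt_section", "hoop")))                            # 265
print(avc_score(80, ("discombobulator_ramp", "discombobulator_passed")))  # 280
```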

I also need to mention that there are three weight classes this year: lightweight (<10lbs), welterweight (10-25lbs) and heavyweight (>25lbs). The high-end weight restriction is 40lbs, so don’t come with anything heavier than that. Students and veterans will run in the same heats, and registration closes August 1. Teams will be required to submit verification of progress on August 15th and September 1st, so plan for that.

So that’s it! It all sounds so easy, doesn’t it? We’ll see about that, and we’ll see you on September 17th!

3D Robotics

Google creates its own laws of robotics


From Fast Company:

In his famous Robot series of stories and novels, Isaac Asimov created the fictional Laws of Robotics, which read:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Although the laws are fictional, they have become extremely influential among roboticists trying to program robots to act ethically in the human world.

Now, Google has come along with its own set of, if not laws, then guidelines on how robots should act. In a new paper called "Concrete Problems in AI Safety," Google Brain—Google's deep learning AI division—lays out five problems that need to be solved if robots are going to be a day-to-day help to mankind, and gives suggestions on how to solve them. And it does so all through the lens of an imaginary cleaning robot.

ROBOTS SHOULD NOT MAKE THINGS WORSE

Let's say, in the course of his robotic duties, your cleaning robot is tasked with moving a box from one side of the room to another. He picks up the box with his claw, then scoots in a straight line across the room, smashing over a priceless vase in the process. Sure, the robot moved the box, so it's technically accomplished its task . . . but you'd be hard-pressed to say this was the desired outcome.

A more deadly example might be a self-driving car that opted to take a shortcut through the food court of a shopping mall instead of going around. In both cases, the robot performed its task, but with extremely negative side effects. The point? Robots need to be programmed to care about more than just succeeding in their main tasks.

In the paper, Google Brain suggests that robots be programmed to understand broad categories of side effects, which will be similar across many families of robots. "For instance, both a painting robot and a cleaning robot probably want to avoid knocking over furniture, and even something very different, like a factory control robot, will likely want to avoid knocking over very similar objects," the researchers write.

In addition, Google Brain says that robots shouldn't be programmed to one-notedly obsess about one thing, like moving a box. Instead, their AIs should be designed with a dynamic reward system, so that cleaning a room (for example) is worth just as many "points" as not messing it up further by, say, smashing a vase.
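A toy way to picture that dynamic reward is to score task progress and then subtract a penalty for every change to the environment the task did not require. This is only an illustrative sketch of the idea, not code from the Google Brain paper.

```python
def shaped_reward(task_progress, env_before, env_after,
                  task_objects=("box",), impact_weight=1.0):
    """env_before/env_after map object -> state (e.g. 'vase': 'intact').
    Changes to objects the task doesn't involve count as side effects."""
    side_effects = sum(
        1 for obj, state in env_before.items()
        if obj not in task_objects and env_after.get(obj) != state
    )
    return task_progress - impact_weight * side_effects

before     = {"box": "left_side",  "vase": "intact"}
after_good = {"box": "right_side", "vase": "intact"}
after_bad  = {"box": "right_side", "vase": "smashed"}

print(shaped_reward(1.0, before, after_good))  # 1.0: box moved, nothing broken
print(shaped_reward(1.0, before, after_bad))   # 0.0: box moved, but the vase paid for it
```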

ROBOTS SHOULDN'T CHEAT

The problem with "rewarding" an AI for work is that, like humans, they might be tempted to cheat. Take our cleaning robot again, who is tasked to straighten up the living room. It might earn a certain number of points for every object it puts in its place, which, in turn, might incentivize the robot to actually start creating messes to clean, say, by putting items away in as destructive a manner as possible.

This is extremely common in robots, Google warns, so much so it says this so-called reward hacking may be a "deep and general problem" of AIs. One possible solution to this problem is to program robots to give rewards on anticipated future states, instead of just what is happening now. For example, if you have a robot who is constantly destroying the living room to rack up cleaning points, you might reward the robot instead on the likelihood of the room being clean in a few hours time, if it continues what it is doing.

ROBOTS SHOULD LOOK TO HUMANS AS MENTORS

Our robot is now cleaning the living room without destroying anything. But even so, the way the robot cleans might not be up to its owner's standards. Some people are Marie Kondos, while others are Oscar the Grouches. How do you program a robot to learn the right way to clean the room to its owner's specifications, without a human holding its hand each time?

Google Brain thinks the answer to this problem is something called "semi-supervised reinforcement learning." It would work something like this: After a human enters the room, a robot would ask it if the room was clean. Its reward state would only trigger when the human seemed happy that the room was to their satisfaction. If not, the robot might ask a human to tidy up the room, while watching what the human did.

Over time, the robot will not only be able to learn what its specific master means by "clean," it will figure out relatively simple ways of ensuring the job gets done—for example, learning that dirt on the floor means a room is messy, even if every object is neatly arranged, or that a forgotten candy wrapper stacked on a shelf is still pretty slobby.
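In code, that feedback loop might look something like the toy sketch below: the true reward only arrives when the human passes judgement, and in between the robot leans on its running estimate of what that judgement will be. Again, this is an illustration of the idea, not the paper's implementation.

```python
class TidinessModel:
    """Learns, from sparse human verdicts, how 'clean' a room state really is."""

    def __init__(self):
        self.estimates = {}                 # room description -> predicted approval

    def predicted_reward(self, room_state):
        return self.estimates.get(room_state, 0.5)   # unsure until told otherwise

    def human_feedback(self, room_state, approved):
        """Fold the human's occasional verdict into the running estimate."""
        old = self.predicted_reward(room_state)
        self.estimates[room_state] = 0.8 * old + 0.2 * (1.0 if approved else 0.0)

model = TidinessModel()
model.human_feedback("objects_tidy_but_floor_dusty", approved=False)
print(model.predicted_reward("objects_tidy_but_floor_dusty"))  # drops below 0.5
```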

ROBOTS SHOULD ONLY PLAY WHERE IT'S SAFE

All robots need to be able to explore outside of their preprogrammed parameters to learn. But exploring is dangerous. For example, a cleaning robot who has realized that a muddy floor means a messy room should probably try mopping it up. But that doesn't mean if it notices there's dirt around an electrical socket it should start spraying it with Windex.

There are a number of possible approaches to this problem, Google Brain says. One is a variation of supervised reinforcement learning, in which a robot only explores new behaviors in the presence of a human, who can stop the robot if it tries anything stupid. Setting up a play area for robots where they can safely learn is another option. For example, a cleaning robot might be told it can safely try anything when tidying the living room, but not the kitchen.

ROBOTS SHOULD KNOW THEY'RE STUPID

As Socrates once said, a wise man knows that he knows nothing. That holds doubly true for robots, who need to be programmed to recognize both their own limitations and their own ignorance. The penalty is disaster.

For example, "in the case of our cleaning robot, harsh cleaning materials that it has found useful in cleaning factory floors could cause a lot of harm if used to clean an office," the researchers write. "Or, an office might contain pets that the robot, never having seen before, attempts to wash with soap, leading to predictably bad results." All that said, a robot can't be paralyzed totally every time it doesn't understand what's happening. Robots can always ask humans when it encounters something unexpected, but that presumes it even knows what questions to ask, and that the decision it needs to make can be delayed.

Which is why this seems to be the trickiest problem to teach robots to solve. Programming artificial intelligence is one thing. But programming robots to be intelligent about their idiocy is another thing entirely.

3D Robotics

Real-time data analysis tool for PX4

MAVGAnalysis is a new tool from the PX4 team to give real-time data analytics without overcomplicating a GCS. From the repository:

This JavaFX-based tool enables PX4 users to record and analyse data published via UDP during flight or based on PX4Logs. It is not intended to replace the QGC. Runnable on OS X, Ubuntu and Windows.

Any feedback, comments and contributions are very welcome.

Status: Last updated 23/06/2016

Features:

  • Realtime data acquisition (50ms sampling)
  • Timechart annotated by messages (in 10secs framing)
  • Trigger recording manually or by selectable flight-mode/state changes
  • Choosable stop-recording delay
  • Display of key-figures during and after recording (with 'Replay')
  • Display of basic vehicle information (online), like mode, battery status, messages and sensor availability
  • XY Analysis for selected key-figures
  • MAVLink inspector
  • Easy to use parameter editor
  • Map viewer of global position and raw gps data with option to record path (cached)
  • Import of selected key-figures from PX4Log (file or last log from device via WiFi)
  • Save and load of collected data
  • FrSky Taranis USB supported in SITL
  • Low latency MJPEG based video stream display based on uv4l (recording and replay in preparation)

Requirements:

  • requires Java 8 JRE
  • A companion proxy (either MAVComm or MAVROS, not required for PIXRacer)
  • Video streaming requires uv4l running on companion

How to build on OSX (other platforms may need adjustments in build.xml):

How to start (all platforms):

  • Go to directory /dist

  • Start either UDP with java -jar MAVGAnalysis.jar --peerAddress=172.168.178.1

    (PX4 standard ports used, replace IP with yours)

    or java -jar MAVGAnalysis.jar --peerAddress=127.0.0.1 for SITL (jMAVSim)

    or just java -jar MAVGAnalysis.jar for a basic demo.

  • Open demo_data.mgc, import a PX4Log file, or collect data directly from your vehicle
  • For video (MJPEG), set up uv4l on port 8080 on your companion with: uv4l --auto-video_nr --sched-rr --mem-lock --driver uvc --server-option '--port=8080'

How to deploy on OS X:

  • Modify build.xml to adjust the peer property.
  • Run ant_deploy

Limitations:

  • Limited to one device (MAVLink ID '1')
  • Currently does not support USB or any serial connection (should be easy to add, so feel free to implement it)
  • PX4Log key-figure mapping is not complete (let me know which ones I should add)

Note for developers:

MAVGAnalysis depends heavily on https://github.com/ecmnet/MAVComm for MAVLink parsing.
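For readers who just want a feel for what "recording data published via UDP" involves, here is a minimal sketch in Python using pymavlink; it is not part of MAVGAnalysis (which is Java and built on MAVComm), and the port and message type are assumptions to adapt to your setup:

```python
# Illustrative only: read MAVLink attitude messages arriving over UDP and log
# them to CSV, throttled to roughly the tool's 50 ms sampling. Port 14550 is
# the usual GCS port; adjust to your setup.
import csv
import time
from pymavlink import mavutil

conn = mavutil.mavlink_connection("udpin:0.0.0.0:14550")
conn.wait_heartbeat()                       # wait until a vehicle is seen

with open("attitude_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["t", "roll", "pitch", "yaw"])
    while True:
        msg = conn.recv_match(type="ATTITUDE", blocking=True, timeout=1)
        if msg is None:
            continue
        writer.writerow([time.time(), msg.roll, msg.pitch, msg.yaw])
        time.sleep(0.05)                    # ~50 ms sampling interval
```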


Read more…
3D Robotics


AP-Manager is a board available now that allows switching between two Pixhawks or APM autopilots:


Product Description

The requirements for operating unmanned aircraft for commercial and semi-professional purposes have changed fundamentally in recent years, in Europe and worldwide (FAA), and the regulations will become even stricter in the future.

Depending on the application, the weight, the purpose, the situation, the flight area, the presence of spectators, and other factors, SAFETY is becoming increasingly important.

Single point of failure and redundancy have become the keywords for permissions and certifications from authorities and insurers.

Among other parts of a system, redundancy of the autopilot is now mandatory to comply with the rules. Multicopters in particular become very unstable when the stabilization system malfunctions.

We therefore developed a redundancy circuit board that acts as an automatic monitoring and switching bridge between two independently running autopilot systems, providing mutual protection in case one autopilot malfunctions. Even if both autopilots fail (power supply, frozen controller…), forwarding of the control signals for manual flight of the aircraft (plane, delta, or helicopter) is guaranteed.

This system also offers the possibility to switch manually between two different autopilot systems, for example to test different software and sensor setups. Another benefit of the AP-Manager is that only one telemetry module is needed to monitor the data of both autopilot systems (except for DJI autopilots, which require two separate downlinks, one from each autopilot).

AP-Manager is a reliable safety companion for flying and testing your unmanned aircraft.
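To make the switching idea concrete, here is a conceptual sketch of heartbeat-based failover in Python; it is not the AP-Manager's actual firmware, and the data structures and timeout are assumptions:

```python
# Conceptual failover sketch: forward the primary autopilot's outputs while it
# is healthy, fall back to the secondary, and pass the pilot's RC input through
# if both have gone silent.
from collections import namedtuple

# Hypothetical view of one autopilot's state, as seen by the bridge.
Autopilot = namedtuple("Autopilot", ["last_heartbeat", "outputs"])

HEARTBEAT_TIMEOUT = 0.5   # seconds without a heartbeat before declaring failure

def healthy(ap, now):
    return (now - ap.last_heartbeat) < HEARTBEAT_TIMEOUT

def select_output(primary, secondary, rc_input, now):
    """Choose which control signals to forward to the motors/servos."""
    if healthy(primary, now):
        return primary.outputs      # normal case: primary autopilot is flying
    if healthy(secondary, now):
        return secondary.outputs    # primary failed: switch to the backup
    return rc_input                 # both failed: pass the pilot's RC through
```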

Read more…
3D Robotics

3689694272?profile=original

From the National Geographic Society:

As a kid, National Geographic Young Explorer grantee Dominique Meyer was torn between becoming a firefighter or an archaeologist. Though his dislike for a high school history teacher led him away from archaeology and toward physics, Dominique re-entered the field as a student at the University of California, San Diego, when he was asked to help pilot autonomous drones for archaeological expeditions in Guatemala and Mexico. This work introduced him to Dominique Rissolo, Ph.D., and motivated Meyer to apply for a National Geographic Young Explorer Grant.

In September 2015, Meyer and expedition team member Rissolo embarked on an expedition to the Mexican state of Quintana Roo. Just a four-hour drive from the chaise lounges and frozen cocktails of Cancún resorts, the Southeastern Mexican state is home to dozens of ancient Maya settlements. Believing that more, never-before-documented settlements might exist in the area, Meyer and Rissolo, along with other members of the Center of Interdisciplinary Science for Art, Architecture and Archaeology (CISA3), began research in Quintana Roo. Their goal was to understand the overall distribution of Maya sites across the Yucatán Peninsula.


Dominique Rissolo uses a handheld GPS receiver to guide him through the forest of Quintana Roo to anomalies identified via drone- and satellite-based remote sensing imagery.

PHOTOGRAPH BY VERA SMIRNOVA

To survey Quintana Roo, the team used low-cost, autonomous fixed-wing drones. Unlike helicopter drones, which can only fly for 30 minutes at a time, fixed-wing drones have flight times of several hours. This meant the team was able to get substantial footage from fewer flights at a lower cost. Using infrared, LIDAR, and visible light, the drones gathered data that the team then correlated with satellite imagery in order to create a 3D digital elevation model.

From the digital elevation model, the team was able to pinpoint where “anomalies” exist in the landscape and target those as likely sites for undocumented Maya settlements. This approach allowed them to target the most promising areas without wasting time or resources. Their method paid off as they discovered three never-before-documented sites, two of which contained pyramids and small artifacts.
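The article doesn't detail how anomalies were flagged, but a simple version of the idea is to look for cells in the digital elevation model that sit well above their local surroundings. The sketch below (NumPy/SciPy, with an assumed window size and threshold) is illustrative rather than the team's actual pipeline:

```python
# Hypothetical DEM anomaly sketch: flag cells raised well above their local
# neighborhood, a crude way to surface mound-like features such as overgrown
# structures.
import numpy as np
from scipy.ndimage import uniform_filter

def find_anomalies(dem, window=31, threshold_m=1.5):
    """dem: 2D array of elevations in meters. Returns a boolean anomaly mask."""
    local_mean = uniform_filter(dem, size=window)   # smooth the local terrain
    residual = dem - local_mean                     # height above surroundings
    return residual > threshold_m                   # cells raised by > 1.5 m
```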

This expedition also proved that the advantages of drone technology stretch beyond reducing time and cost. In Rissolo’s words, “drones are a catalyst for collaboration.” Using drones in archaeological research has brought engineers, with the required technical skills, and archaeologists together in a new way. Rissolo believes that this collaboration benefits both fields and results in more people involved in the mission of conserving cultural heritage.


Dominique Meyer prepares to launch a fixed-wing drone at the Maya site of Conil.

PHOTOGRAPH BY VERA SMIRNOVA

However, at the same time that drone technology has improved and become more accessible (thanks in part to 3D printers), it has also become more closely regulated by the Federal Aviation Administration. Increased regulations and restrictions threaten to disrupt the progress being made across scientific fields through drone technology.

For everyday explorers interested in using drones, Meyer cautions that “while drones are ideal platforms to collect certain types of data, they will unfortunately not solve all problems.” He recommends viewing drones as just one “tool within a toolbox,” meant to complement other methods for collecting data.

Read more…
3D Robotics


Rewind and Return to Me

With the press of a button, Solo will retrace its exact path for the last 60 feet of your flight (this distance will be user-definable in the app) to ensure it avoids obstacles on its way back home. And with the new “return to me” feature, Solo will come home to wherever you stand with your controller and mobile device.
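Conceptually, a rewind feature only needs a rolling buffer of recent positions that can be flown back in reverse order; the sketch below illustrates that idea and is not Solo's actual implementation (the buffer size is an assumption):

```python
# Illustrative "rewind" sketch: keep a rolling buffer of recent positions,
# then return them newest-first as waypoints to retrace the path.
from collections import deque

REWIND_POINTS = 60          # how many recent positions to keep (assumed)
path = deque(maxlen=REWIND_POINTS)

def record_position(position):
    path.append(position)   # called periodically during normal flight

def rewind_waypoints():
    """Waypoints to fly, newest first, retracing the recorded path."""
    return list(reversed(path))
```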

Meet the new Smart Shots — Zipline and Pano

Zipline + Spot Lock

Set an infinite cable in any direction, just by pushing a button. Fly up and down this line, controlling speed and direction as well as the camera. Zipline also has a "spot lock" feature, which works like an Orbit point of focus, except that instead of flying in a circle you're flying in a straight line. Set your zipline, look at something, press spot lock, and Solo keeps the camera fixed on it as it passes, for a dramatic flyby.
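The geometry behind Zipline and spot lock is simple: stick input is projected onto the line's direction so the vehicle never leaves it, and the gimbal yaw is recomputed each frame to keep the camera on the locked point. The sketch below is a hypothetical illustration, not 3DR's code:

```python
# Illustrative Zipline + spot-lock geometry sketch.
import math
import numpy as np

def zipline_step(position, direction, stick_input, dt, speed=5.0):
    """Advance along the zipline: only the component of stick input along the
    line's unit direction moves the vehicle, so it stays on the line."""
    along = float(np.dot(stick_input, direction))        # scalar command on the line
    return position + direction * (along * speed * dt)   # new position on the line

def camera_yaw(position, spot):
    """Yaw angle (radians) that keeps the camera pointed at the locked spot."""
    dx, dy = spot[0] - position[0], spot[1] - position[1]
    return math.atan2(dy, dx)
```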

Pano

The perfect aerial panorama. Set up your shot, and Solo automatically pans and snaps the right photos at the right time, for cylindrical, spherical (“little world”) or video panos.
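For a cylindrical pano, the number of photos and the yaw spacing follow directly from the camera's horizontal field of view and the desired frame overlap; the sketch below is a back-of-the-envelope illustration (the default FOV and overlap are assumptions, not Solo's settings):

```python
# Back-of-the-envelope pano plan: pick evenly spaced yaw stops so consecutive
# frames overlap enough for stitching.
import math

def pano_yaw_stops(hfov_deg=85.0, overlap=0.3):
    """Yaw angles (degrees) at which to stop and take a photo."""
    step = hfov_deg * (1.0 - overlap)          # usable new coverage per frame
    n = math.ceil(360.0 / step)                # photos needed to cover 360 degrees
    return [i * 360.0 / n for i in range(n)]   # evenly spaced stops
```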

Augmented Reality


The Solo app will overlay visuals in real time on your mobile screen. You can see your home point, or make sure your Orbit point of focus is exactly where you want it to be before you fly your Smart Shot.

Custom Geofencing

Define a virtual fence around Solo at any time in flight. You create this virtual “safety net” by setting four points on your satellite view, which creates an area around Solo that the drone can’t leave. Set your fence before taking off, or set and adjust its location while flying, allowing you to easily block off nearby obstacles or entire areas at any time.
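Under the hood, a fence like this reduces to a point-in-polygon test against the four user-set points; the sketch below uses the standard ray-casting method and is an illustration, not 3DR's implementation:

```python
# Ray-casting point-in-polygon test: count how many polygon edges a horizontal
# ray from the point crosses; an odd count means the point is inside.
def inside_fence(point, fence):
    """fence: list of (x, y) vertices in order; point: (x, y)."""
    x, y = point
    inside = False
    n = len(fence)
    for i in range(n):
        x1, y1 = fence[i]
        x2, y2 = fence[(i + 1) % n]
        if (y1 > y) != (y2 > y):                           # edge spans the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```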

Read more…