Chris Anderson's Posts (2718)

3D Robotics

Parrot is the only publicly-listed consumer/commercial drone company, so their quarterly and annual financial reports are one of the few places to get accurate and up-to-date numbers on how the drone industry is really doing. They published their Q4 2016 and full-year 2016 financials earlier this week, and as always they make for interesting reading.

Some observations:

  • Consumer drones continue to be tough for everyone (aside from DJI) due to rapid price declines and commodification: Parrot's consumer drone sales fell by 46% in 2016.
  • Gross margins fell even faster, from 50% in Q4 2015 to 20% in Q4 2016.
  • As a result, Parrot lost a lot of money in 2016: losses were $138m for the year and $45m in Q4 alone.
  • Sales are slowing at their commercial drone subsidiary, senseFly, too, down 32% to $15m for the year.
  • But their partially-owned software subsidiary, Pix4D, had another great year, up 160% to $16m for the year.

Parrot says that it expects 2017 to be better, in part because it will cut costs by eliminating 250 jobs and introduce new products. It will also spin off its older automotive and consumer-electronics divisions and become a pure-play drone company. 

Bottom line: drone hardware is a tough market, consumer drone hardware is even tougher, but the market for commercial drone software, while still young, is looking good. 

Read more…
3D Robotics

From TechCrunch. It works on Raspberry Pi!

Following on the heels of their announcement a few weeks ago about their FLIR partnership, Movidius is making another pretty significant announcement regarding their Myriad 2 processor. They’ve incorporated it into a new USB device called the Fathom Neural Compute Stick.

You can plug the Fathom into any USB-capable device (computer, camera, GoPro, Raspberry Pi, Arduino, etc) and that device can become “smarter” in the sense that it can utilize the Myriad 2 processor inside of it to become an input for a neural network (I’ll come back to all this).

Essentially, it means a device with the Fathom plugged into it can react cognitively or intelligently, based on the things it sees with its camera (via computer vision) or data it processes from another source. A device using it can make its own decisions depending on its programming. The key point is it can do this all natively—right on the stick. No call to the cloud is necessary.

In addition to the stick, Movidius has also created a software system they are calling the Fathom Deep Learning Software Framework that lets you optimize and compile learning algorithms into a binary that will run on the Myriad 2 at extremely low power. In a computer vision scenario, Movidius claims it can process 16 images per second using a single watt of power at full-bore/peak performance. There are many other cognitive scenarios it can be used for though.

They have 1,000 units that they’ll be making available for free to qualified customers, researchers and small companies in the coming weeks. A larger rollout is planned for Q4, targeting a sub-$100 price for the device. So that’s the news.

The Complicated Part: What’s All This Business About Neural Networks And Algorithms?

Still, I wanted to understand how this device is used, in practical terms…to visualize where the Fathom and its software framework fit in with neural networks in an actual deployment. After struggling to grasp it for a bit (and after a few phone calls with Movidius) I finally came up with the following greatly simplified analogy.

Say you want to teach a computer system to recognize images or parts of images and react to them very quickly. For example, you want to program a drone camera to be able to recognize landing surfaces that are flat and solid versus those that are unstable.

To do this, you might build a computer system, with many, many GPUs and then use an open source software library like TensorFlow on that system to make the computer a learning system—an Artificial Neural Network. Once you have this system in place, you might begin feeding tens or even hundreds of thousands of images of acceptable landing surfaces into that learning system: flat surfaces, ship decks, driveways, mountaintops…anywhere a drone might need to land.

Over time, this large computer system begins learning and creating an algorithm, to the point where it can begin to anticipate answers on its own, very quickly. But accessing this system from remote devices requires internet connectivity, and there is some delay for a client/server transfer of information. In a situation like landing a drone, a couple of seconds could be critical.
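To make that training step concrete, here is a minimal sketch of that kind of learning system, assuming a TensorFlow/Keras setup and a hypothetical directory of labeled surface images (data/safe, data/unsafe). It is an illustration of the approach, not Movidius's actual pipeline.

```python
# Minimal sketch: train a small CNN to classify landing surfaces.
# Assumes TensorFlow/Keras and a hypothetical folder layout:
#   data/safe/*.jpg, data/unsafe/*.jpg
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data", image_size=(128, 128), batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # safe vs. unsafe
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)
model.save("landing_classifier.h5")  # this trained network is what would get compiled for the stick
```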

Fathom

How the Fathom Neural Compute Stick figures into this is that the algorithmic computing power of the learning system can be optimized and output (using the Fathom software framework) into a binary that can run on the Fathom stick itself. In this way, any device that the Fathom is plugged into can have instant access to a complete neural network, because a version of that network is running locally on the Fathom and thus on the device.

So in the previous drone example, instead of waiting for cloud calls to get landing-site decisioning information, the drone could just make those decisions itself based on what its camera is seeing in real time, and with extremely low power consumption.

That’s sorta badass.

The Bigger Picture

If you stretch your mind a bit, you can begin to see other practical applications of miniature, low-power, cognitively capable hardware like this: intelligent flight, security cameras with situational awareness, smaller autonomous vehicles, new levels of speech recognition.

Those same size and power factors also make wearables and interactive eyewear excellent targets for use (albeit more likely in a directly integrated way rather than USB add-on). This is notable as Augmented and Mixed Reality capabilities continue to make headlines and get closer to the comfort zones of the general public.

And since computer vision (CV) algorithms are one of the backbones that enable AR/MR to have practical uses, making CV function more powerfully and cognitively in a small footprint and at low power has possibly never been as important. I can see this kind of hardware fitting into that possible future.

Strategically, this approach gives Movidius another way to reach customers. Obviously, they already have integrated hardware agreements with larger companies they are partnered with, like Google and FLIR, but for smaller businesses that still may need onboard intelligence for their projects, releasing the Fathom as a modular add-on opens a new market for small and medium-sized businesses.

Read more…
3D Robotics

This seems very doable -- indeed, I think an off-the-shelf product from PrecisionHawk or Agribotix can already do all of this. Am I missing something?

Challenge Overview

Land O’Lakes, Inc. is seeking proposed solutions that enable scalable, autonomous drone usage in precision agriculture (the “Solution(s)”). While drone technology is one of the hottest topics in the agriculture industry today, current solutions have not yet evolved to make drones a valuable or cost effective tool for farmers.

Land O’Lakes, Inc., a farmer-owned cooperative, is investing in the Land O’Lakes Prize (a total potential prize purse of $150,000), to be awarded to an individual or team that evolves drone technology into a valuable, user-friendly tool for farmers.

The new drone hardware and software Solutions we seek will solve critical issues for farmers. They will limit the need for human involvement in the collection of high resolution field data, decrease the time needed to access crop imagery, and improve the ability for a farmer to make decisions based on field health data. The decision-making this technology will enable has the potential to help farmers better tailor care to the specific needs of crops and that leads to potential gains in water efficiency and crop yield while reducing fertilizer waste. 

 

Plans for the winning Solution

Land O’Lakes is seeking scalable Solutions that they can offer to their members and is using a crowdsourcing challenge to find and evaluate new technologies (the “Challenge”). The competitors will retain ownership of any intellectual property contained in the proposed Solutions.

Solutions that are developed to compete for the Land O’Lakes Prize will also have the potential to be independently repurposed for use in security, inspection, livestock surveillance and other applications which require regular, repeated flights and data collection with high autonomy.   

   

Finalists and Prizes

Competitors must submit written proposals along with videos, log files and other supporting information by August 1, 2017. Judging will determine up to three finalists (the “Finalists”), which will be invited to attend a live demonstration event at an FAA-approved test location. Performance at this live demonstration event will support the determination of the Grand Prize winner, if any. If it is determined, in the judges’ sole discretion, through this demonstration that none of the Finalists have met the Criteria listed herein, no Grand Prize will be awarded.

The Challenge offers the Grand Prize winner a cash prize of $140,000 USD. For each of the Finalists not selected as the Grand Prize winner, the Challenge offers a cash prize of $5,000 USD.

 

How do I participate?

To be eligible to be selected as a Finalist, the Solution must, at minimum:

  • Be able to record orthorectified crop images with Red (600-720 nm) and IR (760-900 nm) spectral bands in a minimum of georeferenced .geotiff format.
  • Be capable of autonomously determining an appropriate flight path to image a field after receiving a file containing the bounds of the area to be imaged (see the sketch after this list).
  • Be capable of autonomously operating (take off, collect data, land, transmit data, etc.) unattended for multiple days, with multiple flights per day.
  • Utilize wireless connectivity (suitable for use in rural areas) to transmit and receive data.
  • Be able to operate in winds of up to 20 mph, and use a data connection to determine if weather conditions are safe for flight.
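For a sense of what the flight-path requirement above involves, here is a minimal, generic sketch (not part of the official Challenge materials) that turns a field boundary into a simple lawnmower-style grid of waypoints. The boundary file format and line spacing are illustrative assumptions.

```python
# Minimal sketch: generate a lawnmower survey grid covering a field boundary.
# Assumptions for illustration only: the boundary arrives as a JSON list of
# (lat, lon) vertices, and we simply sweep the bounding box of that polygon.
import json

def survey_grid(boundary_file, spacing_deg=0.0005):
    """Return a list of (lat, lon) waypoints covering the field's bounding box."""
    with open(boundary_file) as f:
        poly = json.load(f)                     # e.g. [[44.98, -93.26], ...]
    lats = [p[0] for p in poly]
    lons = [p[1] for p in poly]
    lat_min, lat_max = min(lats), max(lats)
    lon_min, lon_max = min(lons), max(lons)

    waypoints, heading_east, lat = [], True, lat_min
    while lat <= lat_max:
        row = [(lat, lon_min), (lat, lon_max)]
        waypoints.extend(row if heading_east else row[::-1])
        heading_east = not heading_east          # alternate sweep direction each pass
        lat += spacing_deg
    return waypoints

if __name__ == "__main__":
    for wp in survey_grid("field_bounds.json"):
        print(wp)
```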

The Solution may include a variety of approaches, including, but not limited to:

  • Drone and UAV hardware (multi-copter, fixed wing, fixed wing/vertical take off hybrid, inflatable, etc.)
  • Base stations, ports, or hangar hardware to enable longer-term autonomy

 

The judging panel will rank the eligible Solutions submitted against the following criteria:

Imaging Capabilities (15%)

  • Record orthorectified crop images
  • Red (600-720 nm) and IR (760-900 nm) spectral bands in a minimum of .geotiff format
  • Automatic image stitching with sun/brightness correction
  • Ability to store a unique identifier to associate imagery with a grower/farmer

Autonomous Operation (35%)

  • Receive a file defining the bounds of the area/fields to be imaged
  • Autonomously determine a flight path to image a given area
  • Identify and avoid obstacles during flight
  • Ability to operate (take off, collect data, land, recharge, transmit data output, etc.) unattended for multiple days, with multiple flights per day
  • Ability to receive/alter instructions mid-flight
  • A human operator must be able to override with manual inputs at any time

Communications and Data Handling (15%)

  • Utilize the Verizon LTE network, or other wireless connectivity (suitable for use in rural areas), to transmit and receive large amounts of data
  • The Solution must be able to collect and store large amounts of collected data

Scalability (25%)

  • A single flying vehicle should be able to image multiple fields (at least three 70-acre fields) within a 1.5-mile radius in a 4-hour window
  • Preference will be given to Solutions able to collect data unattended from the greatest number of fields in the largest total area

Safety (10%)

  • Operation in winds of up to 20 mph
  • Utilize a data connection and/or locally collected data to determine if weather conditions are safe for flight
  • Long intervals between needed service/failures
  • The drone must be able to receive a signal mid-flight to return to a safe location

Additional information may be requested before making a final selection of Finalists.

 

Additional Rules

 Participation Eligibility:

The Prize is open to individuals, age 18 or older, private teams, public teams, and collegiate teams. Individual competitors and teams may originate from any country.  Employees of Land O’Lakes, Inc., HeroX, and the Dean’s Office and Computer Science and Engineering Department at the College of Science and Engineering of the University of Minnesota, and their respective immediate family members or persons living in their households (whether or not related), and advertising agencies, affiliates and/or subsidiaries of any of the above, are not eligible to enter or win.

Submissions must be made in English. All Challenge-related communication will be in English.

To be eligible to compete, you must comply with all the terms of the Challenge as defined in the Challenge-Specific Agreement – Competitor Retains IP, which will be made available and must be signed by the individual or an authorized representative of the team upon registration.

 

Intellectual Property Rights:

As detailed in the Challenge-Specific Agreement – Competitor Retains IP, each competitor will retain all intellectual property rights in its proposed Solution. Each competitor represents that its proposed Solution is of original development by such competitor, is specifically developed for purposes of the Challenge, and shall not infringe or violate any patent, copyright, trade secret, or other proprietary right of any third party.

 

Right of First Negotiation:

In exchange for the cash prize, each Finalist acknowledges and agrees that for a period of twelve (12) calendar months after the date on which the Grand Prize winner is announced (the “Period”), Land O’Lakes, Inc. shall have a right of first negotiation to become the exclusive licensee of the Solution for purposes of the agricultural industry worldwide. As soon as practicable, and in any event within five days after a Finalists’ receipt of any oral or written offer from a third party during the Period with respect to an exclusive license to the Solution for purposes of the agricultural industry, such Finalist shall notify Land O’Lakes, Inc. in writing of such offer, and if the offer is in writing, provide a copy thereof. Land O’Lakes, Inc. and such Finalist shall have ninety (90) days from the date of such notice to negotiate in good faith, an exclusive license at terms no less favorable than those contained in the offer.  If an exclusive license is agreed upon during such ninety (90) day period, the Finalist and Land O’Lakes, Inc. shall than execute a substantive agreement with respect to such exclusive license.  If Land O’Lakes, Inc. elects, in its sole discretion, not to pursue the exclusive license, the applicable Finalist shall be free to agree to the offer from such third party so long as such agreement does not affect any otherwise existing obligations between such Finalist and Land O’Lakes, Inc.

 

Registration and Submissions:

Submissions must be made online (only), via upload to the HeroX.com website, on or before 4:59pm EST on August 1, 2017. All uploads must be in PDF format. No late submissions will be accepted.

 

Selection of Winners:

Based on the winning criteria, prizes will be awarded per the weighted Judging Criteria section above.

Should multiple submissions equally meet the requirements as stated in the judging criteria, preference will be given to submissions that can overperform (i.e. higher image resolution, ability to image a larger area, etc.)

 

Judging Panel:

The determination of the winners will be made by the sole discretion of Land O’Lakes based on evaluation by a panel of judges selected by Land O'Lakes.

 

Additional Information

  • Any indication of "copying" amongst competitors is grounds for disqualification.
  • All proposed Solutions will go through a process of due diligence; any proposed Solution found to be misrepresentative, plagiarized, or sharing an idea that is not their own will be automatically disqualified.
  • All ineligible applicants will be automatically removed from the Challenge with no recourse or reimbursement.
  • No purchase or payment of any kind is necessary to enter or win the Challenge.
  • Void wherever restricted or prohibited by law.

 

Read more…
3D Robotics

Very cool! From the project repo:

AirSim is a simulator for drones (and soon other vehicles) built on Unreal Engine. It is open-source, cross-platform and supports hardware-in-loop with popular flight controllers such as Pixhawk for physically and visually realistic simulations. It is developed as an Unreal plugin that can simply be dropped into any Unreal environment you want.

Our goal is to develop AirSim as a platform for AI research to experiment with deep learning, computer vision and reinforcement learning algorithms for autonomous vehicles. For this purpose, AirSim also exposes APIs to retrieve data and control vehicles in a platform independent way.


Development Status

This project is under heavy development. While we are working through our backlog of new features and known issues, we welcome contributions! Our current release is in beta and our APIs are subject to change.

How to Get It

Prerequisites

To get the best experience you will need a Pixhawk or compatible device and an RC controller. This enables "hardware-in-loop simulation" for a more realistic experience. Follow these instructions on how to get it, set it up, and other alternatives.

Windows

There are two ways to get AirSim working on your machine. Click on the links below and follow the instructions.

  1. Build it and use it with Unreal
  2. Use the precompiled binaries

Linux

The official Linux build is expected to arrive in a couple of weeks. All our current code is cross-platform and CMake-enabled, so please feel free to play around on other operating systems and report any issues. We would love to make AirSim available on other platforms as well.

How to Use It

Manual flights

Follow the steps above to build and set up the Unreal environment. Plug your Pixhawk (or compatible device) into your USB port, turn on the RC and press the Play button in Unreal. You should be able to control the drones in the simulator with the RC and fly around. Press the F1 key to view several available keyboard shortcuts.

More details

Gathering training data

There are two ways you can generate training data from AirSim for deep learning. The easiest way is to simply press the record button on the lower right corner. This will start writing pose and images for each frame.

record screenshot

If you would like more data logging capabilities and other features, file a feature request or contribute changes. The data logging code is pretty simple and you can modify it to your heart's desire.

A more complex way to generate training data is by writing client code that uses our APIs. This allows you to be in full control of how, what, where and when you want to log data. See the next section for more details.

Programmatic control

AirSim exposes easy-to-use APIs to retrieve data from the drones, including ground truth, sensor data and various images. It also exposes APIs to control the drones in a platform-independent way. This allows you to use your code to control different drone platforms, for example Pixhawk or DJI Matrice, without making changes and without having to learn internal protocol details.

These APIs are also available as part of a separate, independent cross-platform library, so you can deploy them on an offboard computer on your vehicle. This way you can write and test your code in the simulator and later execute it on real drones. Transfer learning and related research is one of our focus areas.
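For a flavor of these APIs, here is a short, hedged example assuming the pip-installable `airsim` Python client (which postdates this announcement); method names may differ between versions, so check the repo docs for the current API.

```python
# Hedged sketch of programmatic control via AirSim's Python client.
# Assumes the pip-installable "airsim" package; APIs may differ in other versions.
import airsim

client = airsim.MultirotorClient()      # connects to the simulator over RPC
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)

client.takeoffAsync().join()
client.moveToPositionAsync(10, 0, -5, 3).join()   # NED frame: z is negative-up

# Grab a scene image for training data, platform-independently
responses = client.simGetImages([
    airsim.ImageRequest("0", airsim.ImageType.Scene, False, True)])
airsim.write_file("frame.png", responses[0].image_data_uint8)

client.landAsync().join()
client.armDisarm(False)
client.enableApiControl(False)
```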

More details

Paper

You can get additional technical details in our paper (work in progress). Please cite this as:

@techreport{MSR-TR-2017-9,      title =  } 

Contribute

We welcome contributions to help advance research frontiers.

License

This project is released under MIT License. Please review License file for more details.

Read more…
3D Robotics

DIY Drones at 83,000 members


It's customary and traditional that we celebrate the addition of every 1,000 new members here and share the traffic stats. We've now passed 83,000 members!

Thanks as always to all the community members who make this growth possible, and especially to the administrators and moderators who approve new members, blog posts and otherwise respond to questions and keep the website running smoothly.

Read more…
3D Robotics

This project uses a Raspberry Pi and the Pi camera in a cloud robotics architecture (most processing is done on a laptop over WiFi) and is based on the FormulaPi code and OpenCV. It costs less than $100 and can perform the full Udacity Self Driving Car challenge (thanks to offboarding most processing to the laptop). Full instructions are here
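As a flavor of what the laptop side of that cloud-robotics loop does, here is a minimal OpenCV sketch (my own illustration, not the project's actual code) that thresholds incoming frames for a lane color and derives a steering offset. The MJPEG stream URL and color ranges are assumptions.

```python
# Minimal illustrative sketch of the "laptop side" of a cloud-robotics car:
# read frames streamed from the Pi, threshold for a lane color, and compute
# a steering offset. Not the project's actual code; the stream URL is assumed.
import cv2

STREAM_URL = "http://raspberrypi.local:8080/stream.mjpg"   # hypothetical Pi camera stream
cap = cv2.VideoCapture(STREAM_URL)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # crude red-lane mask; tune the ranges for the actual track and lighting
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))
    m = cv2.moments(mask)
    if m["m00"] > 0:
        cx = m["m10"] / m["m00"]                              # lane centroid (pixels)
        steer = (cx - frame.shape[1] / 2) / (frame.shape[1] / 2)  # normalized -1 .. 1
        print(f"steering command: {steer:+.2f}")              # would be sent back to the Pi over WiFi
```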

It's the entry-level design for our DIY Robocars hack/race series, held monthly in Oakland. The next event is a build day on Feb 11, followed by the full race on Feb 18. 

Read more…
3D Robotics

Great simulator of our DIY Robocars track

We have a monthly autonomous car hack/race day at a funky old warehouse in Oakland, but in between races the teams need to train their neural networks. How? With this simulator!

From the video description:

Some of us are not lucky enough to have a real robot or a real warehouse to test in. But it's fun to dream.

Here's a shot of my run on a virtual track that was inspired by the bay area sd meetup.

The training was done on about 15 GB of image/steering pairs. The training set was first trained without the final track for about 85% of the way. The final course was available for the last 15%. On each training run I included a large variety of turns, lighting conditions, and concrete coloration.

You can reproduce my results here:
https://github.com/tawnkramer/sdsandbox

load the Warehouse scene. Launch the warehouse nn model:
python predict_server.py warehouse
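For readers new to this kind of training, a minimal behavioral-cloning sketch is below: a small Keras network that regresses steering angle from images. It is a generic illustration under assumed file names, not the sdsandbox code itself.

```python
# Generic behavioral-cloning sketch (not the sdsandbox code): regress a
# steering angle from camera images. Assumes images.npy / steering.npy
# already contain matched image/steering pairs.
import numpy as np
import tensorflow as tf

images = np.load("images.npy")      # shape (N, 120, 160, 3), assumed
steering = np.load("steering.npy")  # shape (N,), values roughly -1 .. 1

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=images.shape[1:]),
    tf.keras.layers.Conv2D(24, 5, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(36, 5, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(48, 3, strides=2, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(1)        # steering angle (regression, no activation)
])
model.compile(optimizer="adam", loss="mse")
model.fit(images, steering, validation_split=0.1, epochs=10)
model.save("warehouse_model.h5")
```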

Read more…
3D Robotics

Finals of the FormulaPi autonomous car race

Love this -- it's both thrilling and charmingly shambolic at the same time.  FormulaPi was the model for our DIYRobocar race/hack series of events.  The race starts at 7:04.

It's interesting that although some competitors tried very ambitious neural-network learning approaches to the contest, the winner just tuned the stock code with some better PID values.
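For anyone curious what "tuning PID values" means here, below is a minimal, generic PID steering loop, written as an illustration rather than the actual FormulaPi stock code; the gains shown are placeholders, not the winner's values.

```python
# Minimal generic PID steering controller (illustration, not the FormulaPi code).
# error = lateral offset of the lane centre from the image centre, updated each frame.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Tuning these three gains was essentially the winning strategy.
steering_pid = PID(kp=0.8, ki=0.05, kd=0.2)   # example gains only
# each frame: steer = steering_pid.update(lane_offset, dt)
```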

The consensus from everyone is that the Pi Zero just doesn't have enough horsepower for proper AI.  The organizers are considering switching to Raspberry Pi 3 for the next season, which I heartily endorse.

Read more…
3D Robotics

Tiny new uBlox GPS for drones


u‑blox has announced the launch of the ZOE‑M8G, an ultra‑compact GNSS (Global Navigation Satellite System) receiver module, specially designed for markets where small size, minimal weight and high location precision are essential.

The ZOE‑M8G offers high location accuracy by concurrently connecting to GPS, Galileo and either GLONASS or BeiDou. It also provides -167 dBm of navigation sensitivity. This makes the ZOE‑M8G ideal for wearable devices, unmanned aerial vehicles (UAVs) and asset tracker applications.

The ZOE‑M8G helps simplify product designs, because it is a fully integrated, complete GNSS solution with built‑in SAW‑filter and Low Noise Amplifier (LNA). This means it can be used with passive antennas, without the need for additional components, and without compromising performance.

The ZOE‑M8G GNSS module measures 4.5 mm x 4.5 mm x 1.0 mm. Due to its very small size, a complete GNSS design using a ZOE‑M8G module takes approximately 30% less PCB area compared to a conventional discrete chip design with a CSP chip GNSS receiver.

Uffe Pless, Product Marketing, Positioning Product Center at u‑blox, said: “When you’re designing products such as smart watches, fitness trackers, asset trackers, UBI dongles and even drones, every square millimeter and every gram counts. The u‑blox ZOE‑M8G makes it significantly easier for product designers to achieve precise location tracking while keeping within their strict form factor and weight restrictions.”

Read more…
3D Robotics

Hack/Race DIY Robocars this weekend in Oakland


Join us for our first hack/race day at the new Oakland warehouse location for DIY Robocar events.

There are three tracks:

1) An RGB lane track, modeled after the FormulaPi track:


2) A white-line track for 1/10th scale cars, which is designed to model lanes on real roads. The ones below are just temporary markings; the final lines will be wider and properly stuck to the surface.


3) A walled-in course created by the PowerRacers group for PowerWheels-type racers, especially those using LIDAR to navigate 


Rules:

For RGB track:

• 10m wide track, with red and green lanes.

• Blue lane dividers in FormulaPi course may be replaced by blank gaps showing concrete instead.  

• If this creates a problem for the cars, we will replace the blue tape

• Course is pretty smooth and most bumps are taped over. 1/2″ clearance under cars should be sufficient

For White-Tape track:

• 1.5 meter (5 foot) wide course with borders in 70-110mm (3-4 inch) wide white tape.

• Yellow tape at the centerline, 35mm – 55mm (1.5-2 inch) wide

• The course must have at least one left turn, right turn, hairpin turn (1.5m outside radius) and gradual turn (>3m outside radius)

• Course should fit in a box 20m x 15m

• Up to 3 cars at a time (for now)

• Course may not be smooth so the car should be able to handle step shaped bumps of up to 25mm (1 inch)

• People will stand 3 meters away from the track.

Other stuff that will be provided: Tables, chairs, power, wifi, coffee, pizza. And interesting fun.

Don't worry if you don't have anything to bring/hack on. You're welcome to just hang out and do the meetup thing. Or help others hack their things. This is that brief moment where we can all be n00bs. Someday our cars are supposed to be perfectly safe and autonomous. But for now, let's hack and crash while we still can. 

Looking forward to seeing you all there!

Read more…
3D Robotics

It's been almost a decade since I launched DIY Drones (March 2007) and we've grown hugely as both a community and an industry in the intervening years. Drones work great, they're cheap and widely available, and both the Dronecode and ArduPilot development communities that got their start here are doing amazing work to advance the technology, which is now at pro/aerospace level. So what's left to DIY? 

How about autonomous cars? We actually got started with GPS-guided autonomous cars seven years ago with the ArduRover project, which is now the standard for outdoor rovers such as those that compete in the Sparkfun Autonomous Vehicle Competition. But as the human-scale self-driving car industry takes off, more sophisticated computer vision and AI technology is the new frontier. There are scores of companies doing this, from Google and Tesla to Uber, GM, Ford, BMW and virtually all the other big car firms. But what about DIY? 

Sounds like time for a new site/community! Thus DIY Robocars, which I've just launched in beta as a companion to this site. Right now it's just a blog, but I'll turn on the community forum elements in a week or two once I've got the core contributors in place. This one is based on the WordPress/bbPress stack, which should be more robust than the Ning platform that DIY Drones is based on. If it works well, we may port DIY Drones to this, too.


As you can see, I still don't have a good logo. Any ideas?

Read more…
3D Robotics

NASA simulates Phantom airflow

From NASA:

For decades, NASA has used computer models to simulate the flow of air around aircraft in order to test designs and improve the performance of next-generation vehicles.

At NASA’s Ames Research Center in California’s Silicon Valley, researchers recently used this technique to explore the aerodynamics of a popular example of a small, battery-powered drone, a modified DJI Phantom 3 quadcopter.

The Phantom relies on four whirring rotors to generate enough thrust to lift it and any payload it’s carrying off the ground. Simulations revealed the complex motions of air due to interactions between the vehicle’s rotors and X-shaped frame during flight.

As an experiment, researchers added four more rotors to the vehicle to study the effect on the quadcopter’s performance. This configuration produced a nearly twofold increase in the amount of thrust.

The findings offer new insights into the design of autonomous, heavy-lift, multirotor vehicles for uses such as cargo transportation.

This research was presented at the 2017 American Institute of Aeronautics and Astronautics SciTech Forum in Grapevine, Texas, by Seokkwan Yoon of the NASA Advanced Supercomputing Division at Ames.

Read more…
3D Robotics


Congrats to the AKAMAV team for the victory and great writeup (via Hackaday)!

AKAMAV is the proud winner of the IMAV 2016 Outdoor Challenge. In this post we want to share the approaches, strategies and experiences of the past months.

INTRODUCTION

The IMAV (International Micro Air Vehicle Conference and Competition) is one of the biggest academic drone events these days. It’s hosted by a different university each year; 2016 was the first time it was hosted in China, namely by the Beijing Institute of Technology. The amount of work and hospitality the organizing team put into this was one of a kind. Roughly 20 teams from all over the globe attended. If you share even a remote interest in flying robots and don’t mind the occasional spectacular crash, this place was Disneyland on steroids, with autonomous drones and Chinese noodles.


The competition was split into an indoor and an outdoor part. The goals of the missions were inspired by a hypothetical spill on an oil platform. To save the world from a horrendous hypothetical disaster, drones had to rush in and master the following missions:

OUTDOOR

  1. takeoff from moving platform
  2. live mapping mission area
  3. precise delivery of a life buoy
  4. collecting and delivering of water sample
  5. landing on moving platform

INDOOR

  1. takeoff from moving platform
  2. entering a building
  3. mapping the building
  4. picking up and releasing specific objects
  5. exiting the building
  6. landing on the platform

The complete rule set and all related information can be found here. In a nutshell: each mission was scored depending on the achieved difficulty level (e.g. landing on the moving vs. the standing platform), how small the vehicle was (smaller = better), how autonomously the mission was performed (no human interaction = happy judges) and how many missions were done in a single flight. Two vehicles could operate at the same time. FPV or LOS flying wasn’t permitted outdoors.

APPROACH

So how do you prepare for 11 missions? Not something you do on a weekend. We started roughly 6 months before the event by teaming up with TU Braunschweig’s Institute of Flight Guidance, identified missions which were also relevant for their work and created student thesis projects based on that. Still, as always, most of the work happened in the last weeks before the event. Like last year, we focused on the outdoor missions. Most teams chose to concentrate on either the indoor or the outdoor part, otherwise the amount of work would be sheer overwhelming. Yet we tried to solve indoor problems using FPV flying, because WHY NOT? In past years we aimed for the simplest possible solution; this year we felt a little more adventurous and tried more sophisticated things. Some approaches were more successful than others. Read on for all our dirty secrets!

For the outdoor part, two identical 550 mm quads with 600 g payload were used. Like most teams, we used the Pixhawk autopilot with APM:Copter and an onboard companion PC. Other noteworthy components were a sonar for water pick-up, a self-designed PPP GPS receiver for precise delivery, lots of servos for dropping things, an IR-Lock infrared beacon for precision landing and a USB camera on a servo gimbal for live mapping.

For the indoor part we used two 60 mm TinyWhoop quads (for starting, entering and landing). If you haven’t flown one of these you haven’t seen the light, my friend. For mapping and pick-and-release, a modified 250 mm FPV racer was used. The mapping sensor was a 360° camera combined with Pix4D software.

TRAVELING TO CHINA

Limiting ourselves to little equipment was new for us and tears were shed during the selection process, but it’s possible! We chose the capacity of our batteries according to the most common airplane transportation guidelines to avoid problems (max. 100 Wh batteries). If you want to import multiple custom-built drones to China, please get the export clearance, and don’t do it the day before your flight (lessons learned). By doing so, importing our equipment was not a problem. Some teams were not as lucky: their equipment was seized by Chinese customs and releasing it took several days. Pro tip: if you forgot your clearance, distribute your stuff among all your people and don’t go through customs as a prominent group with flashy boxes. The biggest problem on site was getting a Chinese SIM card (ignore anything but China Unicom shops).


OUTDOOR COMPETITION

The gods of satellite navigation will come to claim their sacrifice.

TAKEOFF MISSION

The first mission consisted of starting from a moving and rocking platform. On the basis of the competition rules we built a moving platform with the same dimensions for testing purposes. It was driven by two electric motors and controlled by an Arduino. An extra rocking feature was not implemented because vibrations caused by the rough ground appeared sufficient. We then used the moving platform for testing the takeoff sequence. In doing so we learned two things: first, we performed the complete flight-controller initialization sequence on a steady surface (no moving or rocking). Second, to make sure the MAV takes off stably, we set the first waypoint to be approached by the MAV above the centre of the circle in which the platform is moving, to increase lateral stability during takeoff.


On competition day we decided to take off from the moving, non-rocking platform, which worked pretty well on the first attempt. Subsequently we tried starting from the moving and rocking platform, which also worked fine.

LIVE MAPPING MISSION

The outdoor mapping mission was our "rabbit out of the hat" trick to gather the last bit of extra effort needed to win the challenge. It was clear from the beginning that only a small number of teams would have the time to implement and test a robust live mapping system, as the current state of the art does not provide a plug&play solution. One may ask why? Why is there no available solution for something so awesome that it allows you to overview a large area the second you fly over it? Good question, let’s hop in!

The Problem

The nature of real-time mapping involves at least the following components:

  1. UAV with camera (+ 2-axis gimbal stabilization),
  2. processing system,
  3. ground station,
  4. performant data link.

While this sounds more or less like the normal setup for offline map creation, a real-time-capable system requires a lot of intercommunication, which makes an all-in-one solution a difficult task. The camera snaps images at a defined interval and sends them to an on-board processor for further steps. Here you have to choose: do I calculate the global map now, on-board, with a wagon full of resources, and accept "sending out a growing global map" as a loss? Or do I reduce resources on the UAV, send out every image to the ground station, calculate a solution there, and risk lost data packages on the way down? The perfect answer is: yes!

The most time-consuming part of generating high-resolution maps live is the visualization. As current computer vision algorithms are rarely performed on full-sized images, the common approach is to resize the input images down to 20-30%. Afterwards the calculated transformation between the input and the global reference map is scaled back up to the desired value, which makes the process of transformation estimation quite performant and fast. However, the visualization does not benefit from these approaches, as you finally have to iterate over all of your pixels to make them visible to the user. When using a 2-megapixel camera, that means 2 million iterations at every defined interval. That is hard to handle for regular desktop computers and even harder for embedded systems. It is therefore smart to leave the easy work to an embedded system on board the UAV and do the visualization work with the whole processing power of the ground station. The benefit is that even if you lose any of the constant-sized data packages (composed of image and transformation matrix), you still have a globally consistent solution running on the UAV. That is especially great when dealing with a lot of radio interference at competitions. We therefore planned to fuse the two approaches into one, using the pros of both.
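A compressed sketch of that downscale-estimate-upscale idea, using OpenCV ORB features (a simplified illustration of the approach described above, not the actual AKAMAV pipeline):

```python
# Simplified sketch of the downscale -> estimate transform -> upscale idea
# described above (not AKAMAV's actual pipeline). Uses OpenCV ORB features.
import cv2
import numpy as np

SCALE = 0.25  # work on ~25% images for speed, as described in the text

def incremental_transform(global_map, new_image):
    """Estimate the full-resolution homography mapping new_image into the global map."""
    small_map = cv2.resize(global_map, None, fx=SCALE, fy=SCALE)
    small_img = cv2.resize(new_image, None, fx=SCALE, fy=SCALE)

    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(small_map, None)
    k2, d2 = orb.detectAndCompute(small_img, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    matches = sorted(matches, key=lambda m: m.distance)[:200]

    dst = np.float32([k1[m.queryIdx].pt for m in matches])   # points in the map
    src = np.float32([k2[m.trainIdx].pt for m in matches])   # points in the new image
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # scale the homography estimated on small images back up to full resolution
    S = np.diag([SCALE, SCALE, 1.0])
    return np.linalg.inv(S) @ H @ S

# the expensive visualization step happens at full resolution on the ground station:
# warped = cv2.warpPerspective(new_image, H_full, (map_w, map_h))
```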

To curb your thrill of excitement at this point a little bit: we were not able to completely meet our goals on this one. The plan was to do the communication with LCM (Lightweight Communications and Marshalling), but in fact we lacked a proper long-range WiFi connection and the time for robust handling. We ended up doing all the work on-board, which is okay for now, because it is only going to get faster and better in the future. For a detailed look, see our published conference paper.

As we were going for the real-time stitched map, we were also reaching for the stars, aiming for the maximum of 72 points in this mission. Using classic approaches like offline photogrammetry without additional automation scripts, other teams could only reach 18 points, which is a huge difference.

The Hardware

Below the mapping setup is displayed, weighing 382.1 grams in total. It consists of a UI-3251LE USB 3.0 camera sponsored by IDS Imaging Systems, a 2-axis servo gimbal stabilization and an Odroid XU4 with WiFi dongle. Additionally, as the WiFi communication was not implemented yet, we connected an HDMI2AV converter (not in the picture) broadcasting the Ubuntu desktop of the Odroid via an FPV transmitter as analog video for the audience, to fulfill the live-streaming condition. The whole setup can be seen integrated into one of our two medium-sized UAVs, named Mausi, in the figure below. Mausi is based on an ArduCopter 3DR-D Quad kit with upgraded motors and 2.4 kg total take-off weight. It is able to fly 14 minutes and is controlled by a Pixhawk autopilot.

The Fall

During the first practice day, while setting up all relevant parameters of the stitching pipeline, we experienced odd flight behaviour. While in AUTO mode, Mausi oscillated around the given GPS trajectory, describing the path consistently but roughly. It seemed like the magnetometer had difficulties measuring the right heading, or was at least being influenced by disturbances. After recalibration it got slightly better, but it was still wandering despite okay weather conditions. We decided to swap the GPS modules with our second, identical setup, which had run smoothly all day. While trying to remove the I2C plug from the Pixhawk, the trouble took its course.

Despite the lovely handling and encouraging words, the connector came out, taking most of the solder pad with it. After various resoldering attempts we had to admit that this Pixhawk had decided not to be part of this year’s IMAV. Here we were, three days left until the competition and 50% of our outdoor hardware broken. Good. Start.

The Recovery

After falling into a deep black hole of despair, a glimmer of light appeared on the horizon. Our friends at the Beijing Institute of Technology mobilized all their resources to organize an exact copy of the Pixhawk we broke. Meanwhile, the electronic powerhouse of the Polish Team JEDI decided to help a competitor in need, using all their soldering skills to resurrect the dead. After replacing the Pixhawk, loading the backup parameter file (professional!) and calibrating everything, the copter flipped to one side after “take-off”. The problem was solved by resetting everything there is on the Pixhawk. What sounds easy as pie is actually close to rocket science, as the Pixhawk desperately tries to save as many core settings as possible and defends them like a wolfmother. There was one day left when we got it running, allowing us to finally test the live mapping setup. Who needs sleep in the first place?

Competition day

After intensive testing and 3 long hours of sleep it was clear: the mapping was going to be a gamble. The area to be mapped was dotted with small trees (see below). As the algorithm assumes the ground to be a plane, every kind of perspective induced by objects is a risk of image misalignment. Additionally, the position of the trees and the overall color is not unique, making it an unforgiving environment for an image-feature-based, naturally error-prone stitching pipeline. For situations like this, the implementation provides GPS-only tracking capabilities so as not to lose the current position in the global map, even when an image transformation cannot be calculated. To do so, the algorithm assumes that the image centroid and the GPS position are the same. Based on this assumption, a scale from pixels to meters between two consecutive centroids can be calculated using the Haversine formula, simultaneously georeferencing the live mapping result. While this task is preferably done using Kalman filtering for sensor fusion, our code snippet was a last-minute child of love featuring some ludicrous bugs. Nevertheless, with the pride of a father and hope in our hearts, we dared to start the live mapping. And failed. When you decide to set the exposure time manually for more control in a visually difficult environment (smog, bright concrete), you should definitely not forget to set it to a proper value at mission start…
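The GPS-only fallback boils down to a Haversine distance between two consecutive image-centroid fixes; a small illustrative sketch of that calculation (not the actual competition code):

```python
# Illustration of the GPS-only scale estimate described above (not the team's code):
# the Haversine distance between two consecutive image-centroid GPS fixes gives the
# metres flown, and dividing by the pixel shift gives a metres-per-pixel scale.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    R = 6371000.0
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

def metres_per_pixel(prev_fix, curr_fix, pixel_shift):
    """prev_fix/curr_fix: (lat, lon) of consecutive image centroids; pixel_shift in px."""
    d = haversine_m(*prev_fix, *curr_fix)
    return d / pixel_shift if pixel_shift else 0.0
```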



Luckily we structured our total time well enough to start an additional try in the second run, this time with worse lighting (later in the day, cloudier sky), a better exposure time and an add-on fall-back setup for offline mapping. Just in case. We somehow succeeded with a not-so-beautiful, yet real-time stitched map. Luckily there were no points for consistency of the result. In the end 5 out of 6 targets could be recognized, making us one of the few teams to solve the live mapping mission at IMAV 2016. Cheers!

DELIVERY MISSION

The delivery of the life buoy had a low barrier to entry in comparison to other missions like the mapping, and was therefore an easy way to collect some points. But it was not completely without challenges. The focus was on a precise delivery with only a small payload weighing less than 100 g. And this is where the trouble starts! How do you achieve navigation with an accuracy well below 5 m? Below, the life buoys and the delivery zone given for the life buoy during the competition are shown.

Initially our answer was to use a precise point positioning (PPP) GPS receiver, increasing the positioning accuracy in comparison to an ordinary GPS receiver down to a few decimeters, enough to fulfill the mission requirements. At that time there was no PPP GPS receiver on the market, so we decided to develop and manufacture one on our own. As usual, it took more time than expected and did not work on the first attempt, especially if you have little experience with board design. As a consequence, the receiver was not working properly by the competition. It found satellites, but with a bad signal strength of less than 30 dB. To achieve a GPS fix, at least 4 satellites with a strength better than 35 dB are necessary.


Our guess at the moment is that the antenna path is damping the incoming satellite signals. One of the many design rules for PCB layouts working with high-frequency signals says the following: don’t use right angles, otherwise the signal is distorted and damped, silly human! If you look at the PCB design shown in the figure above, what can you see in the antenna path? Yes, a right angle! And that seems to be one reason why it is not working properly. We are investigating the layout more closely and will update this page if we find out more. In the end we decided to discard this plan and go for the standard GPS receiver, being aware of the positioning gamble.

The strategy to release the life buoy was simpler and safer. We attached the payload with a small rope to a single servo, dropping it at the designated position with a simple PWM trigger. The position of the delivery zone itself was given in lat/lon coordinates on competition day by the judges. To reduce the risk of losing the GPS fix in the drop zone, we decided to deliver the life buoy at a height of 6 m, avoiding signal shading by the trees in the near surroundings.

Luckily differential GPS (DGPS) was available, which increased the positioning accuracy a little. On our first attempt the MAV hit the 3 m circle of the delivery zone, which is really great with the described setup.

WATER SAMPLING MISSION

The main requirement for the water sampling mission was also precise navigation. With only 20 ml total water volume, the weight of the sample was negligible. However, to achieve the demanded positioning accuracy, for our initial design we decided to go for a really sophisticated approach. Our chosen strategy can be broken down into the following steps:

  • Take off
  • Fly to the water collecting zone in a distinct height
  • Unwind a bottle
  • Take a water sample
  • Rewind the bottle
  • Fly back to the water dropping zone
  • Drop the water sample
  • Land

The figure further down shows the general system structure. Parts struck through were part of the initial design but were removed during testing. To operate the water sampling mechanism, a Raspberry Pi single-board computer (SBC) was used. The communication link between the Pixhawk autopilot and the SBC was established via MAVLink over USB. To measure the height we planned to use a LightWare LiDAR, which is more precise than a barometer and capable of measuring distances up to 130 m. The detailed water sampling mechanism is depicted in the figure below. The total weight of the mechanism is 313 g.


To unwind and rewind the bottle, a digital high-torque servo was used. The bottle can be filled with water through the side holes. To release the water sample, the bottle is pulled up until the limit stop switch is triggered. By increasing the torque again, the spring is compressed until the plug is pushed out. The communication link between the autopilot and the SBC was needed to trigger the SBC at the right time to perform the described sequence, for example at a specific GPS position and height.
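The autopilot-to-SBC trigger described here is a standard MAVLink pattern; below is a hedged pymavlink sketch of the idea. The connection string, drop coordinates and servo channel are illustrative assumptions, not the actual values used.

```python
# Hedged sketch of the Pixhawk -> Raspberry Pi trigger idea described above.
# Connection string, drop coordinates and servo channel are illustrative assumptions.
from pymavlink import mavutil

master = mavutil.mavlink_connection("/dev/ttyACM0", baud=115200)  # USB link to the Pixhawk
master.wait_heartbeat()

DROP_LAT, DROP_LON = 39.958, 116.310   # assumed drop point (deg)
DROP_SERVO = 9                         # assumed AUX servo channel
RELEASE_PWM = 1900

while True:
    msg = master.recv_match(type="GLOBAL_POSITION_INT", blocking=True)
    lat, lon = msg.lat / 1e7, msg.lon / 1e7        # message reports degrees * 1e7
    alt_m = msg.relative_alt / 1000.0              # millimetres -> metres
    # roughly within ~3 m horizontally and at/above the 5 m release height
    if abs(lat - DROP_LAT) < 3e-5 and abs(lon - DROP_LON) < 3e-5 and alt_m > 5.0:
        master.mav.command_long_send(
            master.target_system, master.target_component,
            mavutil.mavlink.MAV_CMD_DO_SET_SERVO, 0,
            DROP_SERVO, RELEASE_PWM, 0, 0, 0, 0, 0)  # move the drop servo
        break
```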

Nevertheless, the described release mechanism worked; at least the mechanical part was working. All along, the weak spot was the communication link between the autopilot and the Raspberry Pi. And here the real story begins. We started the integration phase only one and a half weeks before the competition. As expected, time ran out and the communication link was not working properly. Unfortunately the servo gave up during integration. Additionally, the design of the driver for the winch was crude and too complex. Once we arrived in China, we gave it a last chance with one test. But the final test was also unsuccessful, for reasons still unknown.

Up to the last minute

We had a self-imposed deadline by which solutions had to work or be replaced. Sadly, the communication link for the winch-based water sampling didn’t work by this date. As a consequence we dropped it and went back to a simple servo-drops-something solution. But what should it drop?


To speed up the whole process, we made up a small challenge: we built three different mechanisms and chose the best. You can see the result of the challenge in the figure below:

The idea of the final design was as follows. Take a bottle with fill-in holes at its center and attach two ropes to it, one at the neck and the other at the bottom. Both ropes are fixed by a single servo on the MAV and can be released independently. In comparison to the initial design we had no winch, and the problem was that waypoints below 0 m cannot be approached by the autopilot, so there were approximately 6 m of rope attached underneath the UAV to compensate for air pressure fluctuations. The smallest squall is enough to set the whole thing turning, so we attached chopsticks at equal distances from each other to avoid unintentional twisting of the ropes. To drop the water, the rope attached to the neck of the bottle is released first, as shown in the figure above.

We were not sure we would hit the dropping zone exactly. To still reach the water tank, we added small holes at the neck of the bottle, which performed like a shower head.

Lessons learned

Even though the initial design did not work in the end, it was a great experience and we learned a lot. The biggest lesson is that a complex system takes at least 20 percent of the scheduled time for integration alone. So start on time!


PLATFORM LANDING MISSION

In the early design phase we decided to use the Pixy cam with an associated IR beacon for precision landing, because of the easy integration with the Pixhawk autopilot system. The whole computer vision process is done directly on the Pixy cam and you don’t have to think about it. The connection between the autopilot and the Pixy cam was established via I²C. Basically, the lateral offsets to the target are transmitted and handled by the autopilot. The camera is equipped with an IR filter to prevent visible-light interference. In addition, the IR beacon emits a specific LED pattern at a distinct frequency, increasing the robustness all the more. As the initial approach we attached the Pixy cam to a gimbal under the MAV. However, the results were even worse than with a directly mounted cam, so we attached the camera without any gimbal directly under the MAV, which worked pretty well.

Precision landing was tested on static and moving targets. Tests showed that the MAV can follow a moving target at up to 1 m/s ground speed. At higher speeds the IR beacon moves out of the Pixy cam’s field of view too quickly and the MAV loses track. On static targets we reached a precision of 5 to 10 cm.

At that point we realized it was too challenging to implement a precision landing on a moving platform with the chosen sensor, so at the competition we decided to attempt the landing only on a static platform.

To get a more precise view of the sensor’s capabilities, we built a simple debugging tool to observe the status of the camera during flight and to know whether the beacon was in the field of view or not. For this we took advantage of a simple status LED on the camera module. The LED did exactly what we needed, but was sadly too dark, so without further ado we added a big LED array to ensure visibility from the ground. We were mostly interested in the field of view and the resulting recognition range for better path planning later on. In general, the autopilot processes the measured lateral offsets as soon as the MAV is in land mode and performs a precision landing from there, which means the last waypoint before landing should be near the beacon. The results of the tests revealed that the camera reliably recognizes the beacon at a height of up to 12 m and within a radius of 3–4 m.

Well prepared, on competition day we performed a precision landing on the static platform on the first try.

INDOOR COMPETITION

The indoor part took place in BIT’s indoor sports arena. A one-story building with 4 rooms, one door, one window and one chimney was set up. The missions were:

  1. takeoff from moving platform
  2. entering a building
  3. mapping the building
  4. picking up and releasing specific objects
  5. exiting the building
  6. landing on the platform


ENTERING BUILDING MISSION

Unlike the outdoor part, FPV flight was (still?) allowed indoors. We chose to fly everything FPV, since implementing robust LiDAR- or vision-based navigation would have been too time-consuming. Takeoff, entering, exiting and landing on the platform were done using a TinyWhoop FPV copter. Due to its size of only a few centimeters, the size factor was high, and we were able to master all the planned tasks on the first flight without any problems. The original FPV footage can be seen on YouTube.


PICK AND RELEASE MISSION

A solution for picking up and releasing the small buckets was realized using a modified 250 mm FPV racer. The vehicle was equipped with propeller protection to minimize the impact of contact with walls, etc. A servo-controlled “lance” was mounted in front of the FPV camera in order to pick up and release the bucket. Initial tests showed that collecting the bucket was hard due to its location on a table: the downwash of the approaching copter would interfere with the table surface right before the copter reached the bucket. This situation heavily impeded manual altitude control, making it really hard to hit the small target with the lance. Attempts to solve this using a downward-facing ultrasound sensor were not successful, due to difficulties in tuning the altitude control loop on short notice.


Therefore the landing gear was extended to a length which allowed the copter to land in front of the bucket and pick it up with the lance once the servo was triggered. When the vehicle hovered over the target area, the bucket would be dropped using the servo. Aiming was done using a second FPV camera/transmitter combo; a copilot was then needed to guide the pilot. While the bucket collecting and releasing worked under good conditions in our lab, it didn’t at the test site. The available space for landing the copter on the table was so small that every small error caused the copter to hit an object, causing a crash. Yet no other team managed to do this either.

We lost a lot of landing gear in these crashes. Luckily we found a readily available resource for replacements: chopsticks!

360° MAPPING MISSION

The indoor mapping mission was another challenging task. The task stated that 3 unknown rooms had to be mapped. The most points were given when the created map was in 3D and the furniture in each room was recognizable.

Since we had little experience using LiDAR or optical SLAM algorithms, we decided to use offline photogrammetry for mapping. We had good results using commercial software like Agisoft and Pix4D in outdoor scenarios and wanted to use these indoors too. UAV-based photogrammetry is a little tricky indoors, since you need to plan your flight more carefully to guarantee the needed overlap in every image. Spherical (also known as 360°) cameras seemed like a clever solution to guarantee this overlap while recording the room from every angle (no missing information). Pix4D also started supporting 360° images and named the Ricoh Theta S as the lightest supported camera, which we therefore used. We mounted the camera on the 250 mm racing quad and started testing.


The camera can be controlled with a smartphone app and many settings can be set manually. The camera produces good spherical images and videos without any additional processing. The biggest drawback is the limited final resolution (video 2 MP / photo 14 MP). The created images and videos looked stunning; this technology is here to stay. The results from our mapping approaches were also good, IF the conditions were good. The image displays a dense cloud created with Pix4D using 360° images in the attic of the IFF. You can also see the 3D point cloud on Sketchfab.


The software was able to create good results when all of the following criteria were fulfilled:

  • a lot of overlap (like ~10 images per room or every ~0.5m)
  • a lot of features (no regular white walls etc.)
  • enough light (beware of neon light frequency induced artefacts)
  • static scene (try to hide behind the camera… oh!)

When any of these criteria weren’t met, Pix4D wasn’t able to align the images. In its current state, the workflow is much more sensitive to these disturbances compared to “regular” images, so the use of 360° cameras didn’t really simplify the process.

For mapping, the camera was put in interval shooting mode (one image every 8 seconds at full resolution) while the vehicle was slowly flown through the rooms using FPV. The processing took about 15 min (using low-quality settings). Sadly, the mapping did not work in the rooms prepared for the IMAV indoor competition, the main reason being the lack of features. While the walls were covered with bits of tape, this was not enough. Also, the walls of the building were very slick, which led to constantly changing reflections of the light sources. This 3D point cloud was the successful result we got during the test day.


FINAL WORDS

We learned a lot in the weeks leading up to this event and hope we were able to pass on some of our insights. Team AKAMAV will be in Toulouse at IMAV 2017 to meet all of you wonderful nerds. We are always curious to meet new people! If you are a student, drone enthusiast or professional who wants to contribute, learn or simply geek around with us, please come! We have many more ideas than time on our hands.

Outdoor Competition Scores

  1. AKAMAV, Germany 279.7
  2. ISAE, France 225.6
  3. SCV, Korea 205.7
  4. AUTMAV, Iran 167.4
  5. FanSO, China 155.3
  6. JEDI SG, Poland 84.4
  7. MRL-QIAU, Iran 84.0
  8. BlackBee, Brazil 63.6
  9. MAVLab, the Netherlands 57.6
  10. Cigogne, France 46.2

 

Indoor Competition Scores

  1. KN2C, Iran 423.9
  2. Quetzalcuauhtli, Mexico 361.7
  3. MAVLab, the Netherlands 271.1
  4. CVG-UPM, Spain 268.1
  5. JEDI SG, Poland 253.1
  6. ARC, Iran 218.3
  7. AUTMAV, Iran 198.6
  8. UI-AI, Iran 81.2
  9. AKAMAV, Germany 43.8
  10. MRL-QIAU, Iran 21.8
  11. Persian Gulf, Iran 20.9
  12. BlackBee, Brazil 0
  13. EuroAvionic, Poland 0

Last but definitely not least we want to thank all of our sponsors, namely the Institute of Flight Guidance, TU Braunschweig, GOM GmbH, IDS Imaging Development Systems GmbH and u-blox Holding AG. A special thanks goes out to our team leader Mario Gäbel for doing so much development work and still being able to manage and organize our tour and financial paperwork. Also a big shout out to the rest of the team, namely: Benjamin Hülsen for rock-solid electronic layouts, Dimitri Zhukov for unbreakable constructions, Alexander Kern for computer vision that never stops tracking, Endres Kathe for not marrying an Arduino and Markus Bobbe for destroying only one of our Pixhawks.

Happy Flying!

Read more…
3D Robotics

Pixhawk-powered submarine shown at CES

From TTRobotics

Hydrodynamic Design
With its professional hydrodynamic design, SEADRAGON can move freely in the water; underwater hovering or free exploration can be easily implemented. The main compartment is airtight and cylindrically designed to protect important electronic equipment from moisture. A built-in high-capacity battery offers longer run time (external power source as an option).

Live 1080p HD Camera
SEADRAGON uses CUDA cores for real time vision and sensor fusion. You can use on-board cameras to do real-time HD video streaming of the underwater adventure.

Ideally Unlimited Control Distance (4G application)
SEADRAGON offers wired or 4G / LTE signal control. Provides end-to-end encryption with mutual certificate-based authentication and Video streaming function from multiple on-board cameras. You can control your SEADRAGON directly from your browser in any location.

Underwater Thruster Designed Specifically for Marine Robotics
The main tank body's counterweight can be adjusted for different environments, sea water or fresh water. The brushless electric thruster system allows SEADRAGON to move quickly and quietly, with a maximum forward speed of up to 2 knots.

High Scalability and Programmability
SEADRAGON uses the Pixhawk autopilot, which can be programmed in Python (with OpenCV). With high compatibility and high scalability, you can customize the way you use SEADRAGON and build your own automated applications.

Read more…
3D Robotics

Volta extends from drones to autonomous cars

Our friends at Volta robotics have extended their MAVLink-compatible autopilot series to autonomous cars. This is what they demonstrated at CES this week:

Product Description

LEAD TIME: 60 days (due to post-CES orders peak) + 2 days international shipping

REDUX: same as BASE1 Rover, but reduced computing performance.

BASE 1 STACK

BASE 1 is an advanced and complete stack to remotely control unmanned vehicles. By elevating control capabilities to a higher base, it’s a solid starting point for creating new services, projects and applications.

  • Cloud Station Gateway (for browser-based control, connect your vehicle here)
    • Remote commands (e.g: GoTo, Takeoff, Dive, Set Mode)
    • Remote file transfer (including: Neural Networks, Ground Truth, Python Scripts, MAV Mission Files)
    • Remote triggers (Neural Networks, Python Scripts, Auto Missions)
  • Languages: Python (w/ OpenCV)
  • Self-Driving-Rover Sample Neural Network
  • Deep Learning Frameworks: Caffe, Tensorflow
  • Gstreamer, MavProxy, MavLink
  • Ubuntu 14.04 LTS

BASE 1 HARDWARE

  • Main BOARD:
    • ODROID XU4 OCTA CORE
    • 4G/LTE External Modem
    • 16 GB eMMC
  • All wired, turnkey.

TT-ROBOTIX ROVER VEHICLE

  • Weight:  8.1kg
  • Dimensions:  531mm x 418mm x 321mm
  • Maximum Payload:  15kg
  • Max Speed:  18km
  • User Power:  AC 110V, 5V, 12V
  • Runtime:  2 hours with 4s Batteries
  • Modular structures – perfect for development, simple for production
  • Precise actuators – for smooth movements in small spaces
  • Wide case – for agile sensors / boards placement; batteries for long travels and more
  • Network Technology:  4G/LTE and Wifi ready
  • Independent Power Source for Electronics and Motors
  • Translucent front glass, for visual inspections
  • Modular roof, with several sensors-mount possibilities (including cameras)
  • Pixhawk (or any other ArduRover autopilot)
  • Logitech HD 1080 Pixel camera (h.264 native, Zeiss lens, autofocus).

PERFORMANCE VIDEO

Read more…
3D Robotics


Great news from Don Lake

Announcing the release of QGroundControl V3.1. Notable changes in this release:
- Survey mission support
- GeoFence Support in Plan View
- Rally Point support in Plan View (ArduPilot only)
- ArduPilot onboard compass calibration
- Parameter editor search will now search as you type for quicker access
- Parameter display now supports unit conversion
- GeoTag images from log files (PX4 only)
- System health in instrument panel
- Mavlink 2.0 support (no signing yet)

Updated docs and install instructions are here: https://donlakeflyer.gitbooks.io/qgroundcontrol-user-guide/content/11

Read more…