Lorenz Meier's Posts (8)

Cross-post from my original G+ post, based on an article in The Register: ARM has announced a new embedded core, the Cortex-M7, with up to 400 MHz clock speed. The CPU is in general faster and has an improved pipeline, but the real game changer for our drone applications is the double-precision hardware floating point unit. In particular for global coordinate systems (e.g. when using GPS), having double precision available in hardware is quite enabling. Interestingly, ARM decided to use 3D Robotics (and the Y6 and Iris graphics) and the PX4 project (which has its origins at ETH Zurich) as a use case for how their new microcontroller could impact the market.
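
To make the double-precision argument concrete, here is a small numeric check (my own illustration, not part of the original post) of how finely single vs. double precision resolve a latitude value:

```python
import numpy as np

# One degree of latitude is roughly 111 km.
lat = 47.3769  # example value (Zurich); any mid-latitude behaves the same

# np.spacing() returns the gap to the next representable float,
# i.e. the best-case resolution at this magnitude.
step32 = np.spacing(np.float32(lat))
step64 = np.spacing(np.float64(lat))

print(f"float32 resolution: {step32 * 111_000:.2f} m")   # ~0.42 m
print(f"float64 resolution: {step64 * 111_000:.1e} m")   # ~7.9e-10 m
```

Roughly half a metre of quantization in the coordinate representation alone is marginal for navigation, which is why a hardware double-precision FPU matters here.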

As long-time onboard Linux users, we are happy to see more, and less expensive, Linux single-board computers becoming available. However, interrupt latency, context switch overhead and I/O will remain challenging topics for all-Linux solutions, so improvements in the low-cost microcontroller domain will stay relevant and interesting.

After all, one of the recent novelties Apple claimed for the iPhone was the addition of a motion co-processor, which turns the current iPhone generations into the same design pattern we have long been using in research: a low-latency IMU/autopilot coupled with a full-fledged mobile computer, combining the best of both worlds in terms of flexibility and latency.

Read more…

One of the widely ignored sources of error in attitude and inertial estimation (ignored in open source projects, not in industry) is temperature-dependent offsets and scaling changes. Some open source systems do handle them, and we would love to hear how you are doing it; the purpose of this post is to gather the community's knowledge. Some systems estimate these errors online with a bias state. However, not all of them can be easily and reliably estimated online, and bias estimation always poses the risk that the designer makes too strong assumptions about observability. The quality of the estimate then depends on the flight pattern (straight line vs. circling) and on GPS accuracy.
It is therefore always a good idea to estimate temperature effects upfront (if we reliably can) and remove all known effects from the measurement before estimating the remaining offsets online.
Please find below the link to a 1500 second (25 minute) dataset showing the MPU6K warming from 5 to 40 degrees Celsius.
The temperature drift is obvious. The link points to a ZIP file with the complete Matlab dataset and scripts to plot the data. You will find that tempdrift.m does all the plotting and is already set up with all the convenience logic; you just need to insert your compensation function.
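If you prefer to prototype outside Matlab, here is a minimal sketch of what such a compensation function could look like, assuming a simple polynomial offset model; the array names `temp` and `gyro_x` are placeholders, not the actual variable names in the dataset:

```python
import numpy as np

def fit_temp_compensation(temp, meas, order=3):
    """Fit a polynomial offset model meas ~ p(temp) on the warm-up data."""
    return np.polyfit(temp, meas, order)

def compensate(temp, meas, coeffs):
    """Remove the predicted temperature-dependent offset from a measurement."""
    return meas - np.polyval(coeffs, temp)

# Hypothetical usage with placeholder arrays loaded from the dataset:
# coeffs = fit_temp_compensation(temp, gyro_x)
# gyro_x_compensated = compensate(temp, gyro_x, coeffs)
```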
The dataset is, however, not perfect: it may well turn out that we need multiple (e.g. six) orientations to differentiate between accel offsets and accel scaling changes, and we might need constant rates (e.g. from a record player) for the same differentiation between offset and scaling on the gyroscopes. Please get in touch via the mailing list to request additional data and describe what you want to have.
Note that we already know there are two different offsets / scalings: the per-boot offset of the sensor, which only varies mildly (that's why 10 reference boots at different temperatures are attached for validation), and the temperature-induced general offset (this might be per device or per instance, we'll see).
Please get in touch if you have questions. To make this more fun and give it a personal touch: I'm offering a three-course dinner at a local restaurant if we ever manage to meet (I'm traveling a lot). I will award it to the best solution once I end the competition :).
The benchmark is clear: show that you can measure the Earth's rotation rate of 15 deg/hour. I know it's not impossible with today's MEMS sensors, though I have no baseline for the MPU6K, from which this dataset originates (the mag is the LSM303D's internal magnetometer).
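To put the benchmark in perspective, a quick back-of-the-envelope calculation; the noise density below is a datasheet-order-of-magnitude assumption, not a value measured from this dataset:

```python
import math

# Earth rotates 360 degrees per sidereal day (~86164 s).
earth_rate_dps = 360.0 / 86164.0                           # ~0.0042 deg/s
print(f"Earth rate: {earth_rate_dps * 3600:.1f} deg/hour")  # ~15.0

# Assumed MPU6K-class gyro noise density: ~0.005 deg/s/sqrt(Hz).
noise_density_dps = 0.005

# White noise averages down with the square root of the integration time:
for T in (1, 10, 100):
    sigma = noise_density_dps / math.sqrt(T)
    print(f"averaging {T:3d} s -> sigma = {sigma:.5f} deg/s")

# Noise alone drops below the Earth rate within seconds; the hard part is
# the slowly varying, temperature-dependent bias this dataset characterizes.
```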
Read more…

The Computer Vision and Geometry Lab of ETH Zurich (Swiss Federal Institute of Technology) might be known in this community mostly for Pixhawk and PX4, but our research focus is in fact 3D reconstruction, which means creating 3D models from 2D images (mostly; we also use RGB-D cameras).

To reconstruct an object, the user takes a number of images from different viewpoints around the object, and the 3D points are rendered as they are measured over multiple images, iteratively building the complete 3D model. Use cases include cultural heritage (like the objects shown in the video), building 3D photos / statuettes of your family, or even reconstructing a couch (or a heating system) to see if the furniture fits your living room or the heating appliance fits your basement. 3D reconstructions will become as commonplace as 2D pictures are today.

The novelty in what we have now presented publicly for the first time, at the International Conference on Computer Vision in Sydney, Australia, lies in the fact that the 3D model is created interactively, directly on the phone, allowing users to intuitively understand whether they have acquired enough of the right images, from enough viewpoints, to create a complete 3D model. The previous state of the art required taking images, uploading them to a server and only getting a 3D model back many minutes later. We can still leverage cloud computing power to refine the model obtained on the phone.

It can be compared to the move from analog to digital photography: a digital camera allows you to preview how the image looks while (or shortly after) taking it, as does our approach. The previous state of the art meant taking images and only getting the result back later, potentially after having left the scene, with no chance to take a better picture or adjust the viewpoint.

These results are highly relevant for sUAS / micro air vehicles, because the processing boards available for these small platforms use the same type of processors as mobile phones. Because our technology also provides camera position feedback (suitable for steering the aircraft without GPS, at a rate of 20-50 Hz and ~20-100 ms latency), it can be used to autonomously reconstruct larger objects. While vision-based flight has been successfully demonstrated before (e.g. by our group or our colleagues from ASL), the results obtained on the mobile phone add a dense 3D point cloud (enabling things like terrain following and autonomous waypoint planning around the object) as well as very efficient processing. We plan to leverage both normal cameras and RGB-D cameras as they become available in small form factors.

The app is at this point a technology demo and not available for download. Our intent, however, is to bring a demo version into end-users' hands as soon as possible, but we can't provide a date yet.

Read more…

MAVLink has been in service since 2009, and it is time to evolve it and add encryption, starting with a Request for Comments (RFC). The initial MAVLink protocol was only scoped to serve the PIXHAWK student project at ETH, and hence security was left to the link layer. Adoption has obviously grown far beyond the scope of the original design by now. After Arthur's great job on the MAVLink Wikipedia article there is no further need to explain what MAVLink exactly is, but there is one important insight: adoption and use exceed the original design space by far, in particular because typical links do not provide the authentication and encryption that the original design assumed.

With more and wider adoption, the exposure of the protocol increases, and so the next revision (which will be backwards compatible if by any means technically possible) will add support for authentication and encryption.
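
To illustrate the class of primitives under discussion (and explicitly not the scheme proposed in the RFC draft), here is a minimal sketch of authenticating a frame with a pre-shared key using HMAC:

```python
import hashlib
import hmac
import os

# Illustrative only: sign and verify an opaque payload with a truncated
# HMAC-SHA256 tag. Key management, replay protection and the actual frame
# layout are exactly the questions the RFC is meant to settle.

key = os.urandom(32)  # stands in for a pre-shared session key

def sign(payload: bytes) -> bytes:
    """Append an 8-byte authentication tag to the payload."""
    tag = hmac.new(key, payload, hashlib.sha256).digest()[:8]
    return payload + tag

def verify(frame: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    payload, tag = frame[:-8], frame[-8:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()[:8]
    return hmac.compare_digest(tag, expected)

frame = sign(b"example payload")
assert verify(frame)
```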

The purpose of this post is to ask for comments on the very early draft of the request for comments posted today. We're interested in technical contributions (specs, code, testing) but also in general considerations. At this point it is too early to discuss which hardware it can run on; it will certainly run fine on PX4-generation systems (a first test of the crypto primitives is here), and there is currently no reason to assume that other existing aircraft setups can't be upgraded in a modular fashion as well. Please reply directly to the mailing list thread here for comments on the RFC; for general comments on this post feel free to use the comment box.

Read more…

QGroundControl Update


After we invested most of our time into PX4 for quite a while, the PX4 native stack has reached a shape where we can fly fixed wing in autonomous mode with it, and there is also progress on multicopters. This put QGroundControl back into the development focus, since field tests showed some potential improvements and PX4 user feedback indicated that features like RC calibration were highly wanted. The screenshots below show the current 1.0.9 version.

In parallel, work is under way by the 3DR dev team under Michael Oborne's lead to add some APM-specific features to QGroundControl and to bring some of the very well engineered features of Mission Planner to a cross-platform codebase. One of the first and most visually prominent features will be the Primary Flight Display, but there is much more to come.

A first Mac OS binary is available on the QGroundControl downloads page; a Windows binary will follow shortly.

(Screenshots: QGroundControl 1.0.9.)

Read more…

MAVLink 1.1 Request for Comments


RFC post here. After more than one year of service, the MAVLink 1.0 spec needs an update to 1.1 to account for developments on autopilot platforms (new messages) and to clarify and improve some aspects of the protocol. In contrast to the upgrade from 0.9 to 1.0, the intent is to make this a 100% compatible update, so that autopilots and ground control stations can benefit from the new functionality but won't be left behind if they don't upgrade immediately.
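
One common way such an update stays compatible, sketched below under my own assumptions rather than from the actual MAVLink parser code: receivers dispatch on the message ID and silently skip IDs they do not know, so newly added messages never break an older implementation.

```python
# Hypothetical dispatcher; the message IDs and handlers are invented for
# illustration and are not the actual MAVLink 1.1 additions.
HANDLERS = {
    0: lambda payload: print("HEARTBEAT:", payload),
    # ...handlers for every message this implementation understands...
}

def handle_message(msg_id: int, payload: bytes) -> None:
    handler = HANDLERS.get(msg_id)
    if handler is None:
        return  # unknown (e.g. newly added) message: ignore rather than fail
    handler(payload)
```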

This post is mostly directed at interested users and implementers, and is meant to make the general public aware that protocol change proposals should be handed in now.

Please register for the mailing list in order to be able to post.

LINK TO RFC POST

Read more…

Providing the user / pilot with good feedback is very important. Right now most autopilots just have 1-3 LEDs, but no interaction concept. We would like to collect some feedback on this topic. The survey takes less than 60 seconds to complete and will help us a lot: Link to survey

Based on this feedback and general UI concepts, we're trying to come up with a consistent interaction concept that uses the different LEDs, switches and buzzers to make the operation of the PX4 hardware as user-friendly and safe as possible.
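
As a purely hypothetical sketch of what such a consistent mapping from vehicle state to LED/buzzer feedback could look like (the states and patterns here are invented for illustration, not results of the survey):

```python
from enum import Enum

class VehicleState(Enum):
    DISARMED = "disarmed"
    ARMING = "arming"
    ARMED = "armed"
    ERROR = "error"

# One state, one unambiguous pattern: the pilot should never have to
# guess what a blink code means.
FEEDBACK = {
    VehicleState.DISARMED: ("blue LED", "slow blink", "no buzzer"),
    VehicleState.ARMING:   ("blue LED", "fast blink", "single beep"),
    VehicleState.ARMED:    ("green LED", "solid", "no buzzer"),
    VehicleState.ERROR:    ("red LED", "fast blink", "repeated beep"),
}

def announce(state: VehicleState) -> None:
    led, pattern, buzzer = FEEDBACK[state]
    print(f"{state.value}: {led}, {pattern}, {buzzer}")
```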

Read more…