Comments

  • Awesome video!

  • Thanks for the refs; I'll check them out. @Gary--http://www.pmdtec.com/products_services/reference_design.php

    + sensor fusion locked onto Google Maps geometry at 100Hz. OK, this thing is for indoors, but still--light doesn't care much about distance.

  • Most of my focus at the moment is on computer vision, so I won't claim to be up to date with the very latest papers on MPC and receding horizon control (the latter term is more common in robotics). Tom Schouwenaars has been doing some great work in this area, working on MAVs and copters, since his PhD (2006 iirc), and Jonathan How (his lab leader) has done a lot of ground-breaking work in autonomous vehicles research. Check out their recent work on multi-vehicle guidance (it's pretty cool). There's also a decent survey paper on MPC by Holkar (2010) in the International Journal of Control and Automation, which reviews the leading MPC algorithms/strategies.
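
    For the uninitiated, the receding-horizon idea is: optimize a control sequence over a short, model-based prediction horizon, apply only the first control, then re-solve from the new state. A minimal toy sketch (mine--the double-integrator model and quadratic costs are illustrative placeholders, not from any of the papers above):

        # Toy receding-horizon (MPC) loop on a 1D double integrator.
        import numpy as np
        from scipy.optimize import minimize

        DT, H = 0.1, 10                           # timestep [s], horizon [steps]
        A = np.array([[1.0, DT], [0.0, 1.0]])     # state: [position, velocity]
        B = np.array([0.5 * DT**2, DT])

        def horizon_cost(u_seq, x0, target):
            """Roll the model forward over the horizon, summing quadratic cost."""
            x, cost = x0.copy(), 0.0
            for u in u_seq:
                x = A @ x + B * u
                cost += (x[0] - target)**2 + 0.1 * x[1]**2 + 0.01 * u**2
            return cost

        x, target = np.array([0.0, 0.0]), 5.0
        for _ in range(50):                       # receding horizon: re-solve each step
            u_opt = minimize(horizon_cost, np.zeros(H), args=(x, target)).x
            x = A @ x + B * u_opt[0]              # apply only the first control
        print("final position:", round(x[0], 2))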

  • Doing it the way they do it isn't cheating; it lets them do spectacular things in a (wired) setting.

    For us, SLAM, 3D cameras, optical flow, structured light, laser scanners and some serious onboard computing power are going to let us do it in a way that is actually useful.

  • Yeah, the MoCap demos have something like the effect of Hollywood bots in spoiling the glory of the real bots. What I was getting at is that, at least based on the specs & physical/computational constraints, it appears the current generation should be able to do much of this--maybe not so perfectly, or in such a perfectly controlled environment, but something similar in terms of dynamics & coordination. I'd be interested in reading the recent landmark papers on model predictive control, limiting conditions & such, if you know of 'em.

  • What I said was that there is a significant performance difference between GPS and MoCap, and as such, labelling MoCap as "indoor GPS" is really just a way of having a lay audience understand the principle of using an external set of sensors to provide position/attitude/rate data. Of course, there's also an intrinsic difference in how they determine this information... MoCap does the computation off-board, whereas GPS uses differences in the transmission of a constant signal from multiple sources to compute this information onboard (see the toy sketch at the end of this comment).

    As for hardware... their old onboard hardware is well out of date and they know that. I had a discussion with Lorenz about this last year, and they are evolving their hardware away from their 2009 spec, replacing their onboard systems with either new off-the-shelf components or the PX4 system, which is of course an offshoot of their research and learning over the past 5 years.

    As for your last comment, J, I'm not in any way attempting to denigrate their success... indeed, because of their trail-blazing, it is far easier for people like me to get (a) money and (b) research students to go in other directions, using projects such as the PX4 and APM as a basis for advanced research systems.

    I do feel, though, that there is some hype around the capabilities demonstrated using off-board processing for model predictive control (not just by ETH, but also groups such as the GRASP lab)... I was doing onboard model predictive control for adaptive navigation in real time on UAVs 15 years ago (and this was in highly nonlinear, stochastic wind fields, so not a trivial domain). Sure, the frame times weren't millisecond duration as they are today (they were of the order of seconds), but the computations are the same and can be trivially applied to quadrotors today, taking advantage of modern multi-core embedded processors. This is why I don't consider this style of control to be bleeding edge. Doing all this without the aid of external observation systems (GPS, MoCap, etc.) is the challenging problem!
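
    To make the onboard GPS computation concrete: the receiver solves for its own position and clock bias from pseudoranges to several satellites. A toy sketch with synthetic geometry (no real ephemeris, signal tracking or error models):

        # Recover receiver position + clock bias from four pseudoranges
        # via Gauss-Newton; satellite positions here are made up.
        import numpy as np

        C = 299_792_458.0                             # speed of light [m/s]
        sats = np.array([[15e6, 10e6, 20e6],          # fictitious positions [m]
                         [-10e6, 18e6, 19e6],
                         [12e6, -14e6, 21e6],
                         [-8e6, -9e6, 22e6]])
        truth, bias = np.array([1e6, 2e6, 0.0]), 1e-3 # true receiver state
        rho = np.linalg.norm(sats - truth, axis=1) + C * bias   # pseudoranges

        est = np.zeros(4)                             # [x, y, z, c*bias]
        for _ in range(10):                           # Gauss-Newton iterations
            d = np.linalg.norm(sats - est[:3], axis=1)
            residual = rho - (d + est[3])
            # Jacobian: unit vectors satellite->receiver, plus the bias column
            H = np.hstack([(est[:3] - sats) / d[:, None], np.ones((4, 1))])
            est += np.linalg.lstsq(H, residual, rcond=None)[0]
        print("position error [m]:", np.linalg.norm(est[:3] - truth))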

  • So you're basically saying it's not basically using basically an indoor GPS as they claim.

    The most power-hungry onboard system they evaluated--a Core Duo--was 230g, 15% of the bot's power budget, & that computer is a lethargic & hungry pig by today's standards. Their FPGA-based optical flow performed about the same as the off-board system, or so it appears in their paper, at just a few watts & little weight (a toy version of that flow computation is sketched at the end of this comment). This is an indication to me that they are already there hardware-wise, & the off-board system is a prototype for ease of development & demos.

    The bleeding-edge portion of their control system--at least bleeding edge for hardware folk afraid to release control--is their operating system. Training wheels which won't impress Dennis Ritchie, but its demonstration does much to remove the mythology surrounding them. It's critical for moving forward in utilizing multiple cores & networked systems, & not tripping over packaging & organizational issues.

    This concept of vision sitting on top of a bare-metal embedded system is becoming blurred in favor of something more appropriate to current hardware, & these guys are doing a fine job of exploring that & sharing their results.
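
    To give a flavor of what an optical-flow pipeline like theirs computes, here's a single-window Lucas-Kanade sketch in plain NumPy (the synthetic image & sub-pixel shift are mine, not from their paper):

        # Estimate the (dx, dy) image shift of one patch between two frames
        # by solving the brightness-constancy equations in least squares.
        import numpy as np

        def lucas_kanade(prev, curr, y, x, win=7):
            h = win // 2
            Iy, Ix = np.gradient(prev.astype(float))      # image gradients
            sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
            A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
            It = (curr.astype(float) - prev)[sl].ravel()  # temporal gradient
            v, *_ = np.linalg.lstsq(A, -It, rcond=None)   # solve A v = -It
            return v                                      # (dx, dy) in pixels

        ys, xs = np.mgrid[0:64, 0:64]
        scene = lambda shift: np.sin(0.3 * (xs - shift)) * np.cos(0.25 * ys)
        prev, curr = scene(0.0), scene(0.4)               # 0.4 px shift right
        print("estimated flow (dx, dy):", lucas_kanade(prev, curr, 32, 32))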

  • There is a huge difference between GPS and marker-based motion capture. Commercial MoCap systems vary, but a practical update frequency for full 12 DoF estimation is around 120Hz. Depending on the system, you generally have millimeter accuracy in position (and proportionate accuracy in rates at the selected update frequency; quick numbers at the end of this comment). GPS cannot compete with this, mostly because the sensors are roughly 20,000km away, rather than a few meters.

    The ETH group is not doing anything specifically bleeding edge in terms of control theory, but what they do (and excel at) is showing how modern methods can be applied in real time, in challenging applications. The use of motion capture technology is necessary for their approach to trajectory generation and control, also because doing that computation on board would require considerably more weight in batteries and computers, and consequently diminish performance. One of the clear messages from the presentation is that as computational power continues to cost less (in terms of weight and energy) and as sensors improve, we can move away from external sensors/computation to onboard systems.

    The bleeding edge is definitely in vision-based control and using vision to close stabilisation and navigation control loops... but we're not at commercial vision systems for micro air vehicles yet.
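
    On the rate-accuracy point above: differentiating millimeter-level position noise at the update frequency amplifies it by roughly sqrt(2) times that frequency. A small sanity check (the trajectory and noise figures are illustrative):

        # Finite-difference velocity from noisy 120Hz position samples.
        import numpy as np

        F, SIGMA = 120.0, 1e-3                  # rate [Hz], position noise [m]
        rng = np.random.default_rng(1)
        t = np.arange(0.0, 2.0, 1.0 / F)
        pos = 0.5 * np.sin(2 * np.pi * t)       # true trajectory [m]
        meas = pos + rng.normal(0.0, SIGMA, t.size)
        vel_err = np.diff(meas) * F - np.diff(pos) * F
        print("velocity noise [m/s]:", np.std(vel_err))
        print("predicted sqrt(2)*sigma*f:", np.sqrt(2) * SIGMA * F)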

  • Hey, he says it's "basically an indoor GPS system". It should therefore basically work the same with the outdoor GPS system when he brings it outside.

    The AutoQuad looks as good as or better in performance--it would be nice to find a basis for comparison. But there's nothing out there that comes close in terms of extensibility & collaboration. They take full advantage of a real, fully featured OS, with no evidence of the interrupt-latency issues everyone is so afraid of. They've got a publish & subscribe system encompassing most of the relevant components (don't know why they didn't use ROS; a toy version of the pattern is sketched below). Hell, they've even got a shell, top, wget, & sendmail--everything but the rsync!
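
    The publish & subscribe pattern in miniature (an in-process stand-in; the topic name and messages are invented, and this is not their actual API):

        # Minimal topic broker: any number of subscribers per topic,
        # each publish fans out to all of them.
        from collections import defaultdict

        class Broker:
            def __init__(self):
                self._subs = defaultdict(list)   # topic -> list of callbacks

            def subscribe(self, topic, callback):
                self._subs[topic].append(callback)

            def publish(self, topic, msg):
                for cb in self._subs[topic]:
                    cb(msg)

        bus = Broker()
        bus.subscribe("attitude", lambda m: print("controller got", m))
        bus.subscribe("attitude", lambda m: print("logger got", m))
        bus.publish("attitude", {"roll": 0.01, "pitch": -0.02, "yaw": 1.57})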

  • Now try doing all this without the aid of Vicon trackers! This seems like a trivial task when you have accurate 3D position and orientation information. More interesting would be robust visual odometry to replace the trackers (a rough sketch follows).
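
    A rough monocular visual-odometry sketch (assumes OpenCV; the camera intrinsics are placeholders, and monocular VO only recovers translation up to scale--one of the things the trackers give you for free):

        # Track features between two frames, then recover the relative
        # camera rotation R and (unit-scale) translation direction t.
        import cv2
        import numpy as np

        FOCAL, PP = 700.0, (320.0, 240.0)       # assumed intrinsics

        def relative_pose(prev_gray, curr_gray):
            p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                         qualityLevel=0.01, minDistance=7)
            p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                     p0, None)
            good = status.ravel() == 1          # keep successfully tracked pts
            p0, p1 = p0[good], p1[good]
            E, _ = cv2.findEssentialMat(p0, p1, focal=FOCAL, pp=PP,
                                        method=cv2.RANSAC, prob=0.999,
                                        threshold=1.0)
            _, R, t, _ = cv2.recoverPose(E, p0, p1, focal=FOCAL, pp=PP)
            return R, t

        cap = cv2.VideoCapture(0)               # any camera or video file
        ok1, f1 = cap.read()
        ok2, f2 = cap.read()
        if ok1 and ok2:
            g1 = cv2.cvtColor(f1, cv2.COLOR_BGR2GRAY)
            g2 = cv2.cvtColor(f2, cv2.COLOR_BGR2GRAY)
            R, t = relative_pose(g1, g2)
            print("R:\n", R, "\nt direction:", t.ravel())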
