We've now had a few days to get back into a normal routine after the excitement of the Outback Challenge last week, so I thought this was a good time to write up a report on how things went for the team I was part of, CanberraUAV. There have been plenty of posts already about the competition results, so I will concentrate on the technical details of our OBC entry: what went right and what went badly wrong.

For comparison, you might like to have a look at the debrief I wrote two years ago for the 2012 OBC. In 2012 there were far fewer teams, and nobody won the grand prize, although CanberraUAV came quite close. This year there were more than 3 times as many teams, and 4 teams met the completion criteria for the challenge, with the winner coming down to points. That means the Outback Challenge is well and truly "done", which reflects the huge advances in amateur UAV technology over the last few years.

The drama for CanberraUAV really started several weeks before the challenge. Our primary competition aircraft was the "Bushmaster", a custom built tail dragger with a 50cc petrol motor designed by our chief pilot, Jack Pittar. Jack had designed the aircraft both to have plenty of room inside for whatever payload the rest of the team came up with, and to easily fit in the back of a station wagon. To achieve this he had designed it with a novel folding fuselage. Jack (along with several other team members) had put hundreds of hours into building the Bushmaster and had done a great job. It was beautifully laid out inside, and really was a purpose built Outback Challenge aircraft.


Just a couple of days after our D3 deliverable was sent in we had an unfortunate accident. A member of our local flying club was flying his CAP232 aerobatic plane at the same time we were doing a test mission, and he misjudged a loop. The CAP232's loop went right through the rear of the Bushmaster fuselage, slicing it off. The Bushmaster hit the ground at 180 km/h, with predictable results.


Jack and Greg set to work on a huge effort to build a new Bushmaster, but we didn't manage to get it done in time. Luckily the OBC organisers allowed us to switch to our backup aircraft, a VQ Porter 2.7m RTF with some customisations. We had included the Porter in our D2 deliverable just in case it was needed, and already had plenty of autonomous hours up on it as we had been using it as a test aircraft for all our systems. It was the same basic style as the Bushmaster (a petrol powered tail dragger), but was a bit more cramped inside for our equipment and used a somewhat smaller engine (a DLE35).

Strategy

Our basic strategy this year was the same as in 2012. We would search at a relatively low altitude (100m AGL this year, compared to 90m AGL in 2012), with an interleaved "mow the lawn" pattern. This year we set up the search with 60% overlap compared with 50% last year, and we had a longer turn around area at the end of each search leg to ensure the aircraft got back on track fully before it entered the search area. With that search pattern and a target airspeed of 28m/s we expected to cover the whole search area in 20 minutes, then we would cover it again in the reverse direction (with a 50m offset) if we didn't find Joe on the first pass.
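As a rough sanity check, the leg spacing implied by a given overlap follows from the camera's ground footprint. This is just a back-of-envelope sketch; the 90-degree field of view in the example is an assumption for illustration, not our actual lens:

```python
import math

def leg_spacing(agl_m, hfov_deg, overlap):
    # width of ground seen by the camera at this altitude
    footprint = 2.0 * agl_m * math.tan(math.radians(hfov_deg) / 2.0)
    # adjacent legs are spaced so consecutive image strips overlap by this fraction
    return footprint * (1.0 - overlap)

# e.g. at 100m AGL with a hypothetical 90-degree lens and 60% overlap
spacing = leg_spacing(100.0, 90.0, 0.60)  # 80.0 metres between legs
```

More overlap means narrower leg spacing and more legs to fly, which is the trade-off against total search time.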

As with 2012 we used an on-board computer to autonomously find Joe. This year we used an Odroid-XU, which is a quad core system running at 1.6GHz. That gave us a lot more CPU power than in 2012 (when we used a pandaboard), which allowed us to use more CPU intensive image recognition code. We did the first level histogram scanning at the full camera resolution this year (1280x960), whereas in 2012 we had run the first level scan at 640x480 to save CPU. That is why we were happy to fly a bit higher this year.
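To give an idea of what a first-level scan does, here is a much simplified sketch: score each block of the image by how far its quantised colour histogram deviates from the image-wide histogram, so unusually coloured objects stand out. This is illustrative only and not the actual cuav algorithm:

```python
from collections import Counter

def score_blocks(pixels, width, height, block=16, levels=8):
    """Score each block by the distance between its quantised colour
    histogram and the whole image's histogram. `pixels` is a row-major
    list of (r, g, b) tuples. A sketch only, not the cuav code."""
    step = 256 // levels
    quant = [(r // step, g // step, b // step) for (r, g, b) in pixels]
    total = Counter(quant)
    n = float(len(quant))
    scores = {}
    for by in range(0, height - block + 1, block):
        for bx in range(0, width - block + 1, block):
            cells = [quant[(by + y) * width + bx + x]
                     for y in range(block) for x in range(block)]
            local = Counter(cells)
            m = float(len(cells))
            keys = set(total) | set(local)
            # L1 distance between the normalised histograms
            scores[(bx, by)] = sum(abs(local[k] / m - total[k] / n)
                                   for k in keys)
    return scores
```

Blocks containing a rare colour (a blue jacket in a brown paddock) get high scores, while uniform terrain scores near zero; thresholding those scores is what produces the candidate thumbnails.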

While the basic approach to image recognition was the same, we had improved the details of the implementation a lot in the last two years, with hundreds of small changes to the image scoring, communications and user interface. Using our 2012 image data as a guide (along with numerous test flights at our local flying field) we had refined the cuav code to provide much better object discrimination, and also to cope better with communication failures. We were pretty confident it could find Joe very reliably.

The takeoff

When the running order was drawn from a hat we were the 2nd last team on the list, so we ended up flying on the Friday. We'd been up early each morning anyway in case the order was changed (which does happen sometimes), and we'd actually been called out to the airfield on the Thursday afternoon, but didn't end up flying then due to high wind.

Our time to takeoff finally came just before 8am Friday morning. As with 2012 we did an auto takeoff, using the new tail-dragger takeoff code added to APM:Plane just a couple of months before.

Unfortunately the takeoff did not go as planned. In fact, we were darn lucky the plane got into the air at all! As soon as Jack flicked the switch on the transmitter to put the plane into AUTO it started veering left on the runway, and nearly scraped a wing as it limped its way into the air. A couple of seconds later it came quite close to the tents where the OBC organisers and our GCS were located.

It did get off the ground though, and missed the tents while it climbed, finally switching to normal navigation towards the 2nd waypoint once it reached an altitude of 40m. Jack had his finger on the transmitter switch that would have given him manual control, and very nearly aborted the takeoff. This would have to go down as one of the worst takeoffs in OBC history.

So why did it go so badly wrong? My initial analysis when I looked at the logs later was that the wind had pushed the plane sideways. After examining the logs more carefully though I discovered that while the wind did play a part, the biggest issue was the compass. For some reason the compass offsets we had loaded were quite different from the ones we had been testing with in all our flights in Canberra. I still don't know why the offsets changed, although the fact that we had compass learning enabled almost certainly played a part. We'd done dozens of auto takeoffs in Canberra with no issues, with the plane tracking beautifully down the center line of the runway. To have it go so badly wrong for the flight that mattered was a disappointment.

I've decided that the best way to fix the issue for future flights is to make the auto takeoff code completely independent of the compass. Normally the compass is needed to get an initial yaw for the aircraft while it is not moving (as our hobby-grade GPS sensors can't give yaw when not moving), and that initial yaw is used to set the ground heading for the takeoff. That means that with the current code, any compass error at the time you change into AUTO will directly impact the takeoff.

Because takeoff is quite a short process (usually 20s or less), we can take an alternative approach. The gyros won't drift much in 20s, so what we can do is just keep a constant gyro heading until the aircraft is moving fast enough to guarantee a good GPS ground track. At that point we can add whatever gyro based yaw error has built up to the GPS ground course and use that as our navigation heading for the rest of the takeoff. That should make us completely independent of compass for takeoff, which should solve the problem for everyone, rather than just fixing it for our aircraft. I'm working on a set of patches to implement this, and expect it to be in the next release.
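The idea can be sketched as follows (illustrative Python only, not the actual APM:Plane patch; the state handling and the 5 m/s GPS speed threshold are assumptions for the example):

```python
def takeoff_nav_heading(state, gyro_yaw, gps_speed, gps_course, min_gps_speed=5.0):
    """Compass-free takeoff heading. Hold the gyro yaw captured when AUTO
    was engaged; once the GPS ground course is trustworthy, remove the
    accumulated gyro error and lock the corrected heading."""
    if state['locked_course'] is not None:
        return state['locked_course']           # correction already made
    if gps_speed >= min_gps_speed:
        # signed difference between current gyro yaw and true ground course
        error = (gyro_yaw - gps_course + 180.0) % 360.0 - 180.0
        state['locked_course'] = (state['initial_yaw'] - error) % 360.0
        return state['locked_course']
    return state['initial_yaw']                 # still too slow: hold gyro heading

# engage AUTO believing we point at 90 degrees
state = {'initial_yaw': 90.0, 'locked_course': None}
takeoff_nav_heading(state, 90.0, 1.0, 0.0)    # slow: holds 90.0
takeoff_nav_heading(state, 92.0, 6.0, 95.0)   # fast enough: locks 93.0
```

Note that the absolute value of the initial yaw never matters here; only gyro drift over the first few seconds enters the result, which is why a bad compass cannot corrupt the takeoff.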

Note that once the initial takeoff is complete the compass plays almost no role in the flight of a fixed wing plane if you have the EKF enabled, unless you lose GPS lock. The EKF rejected our compass as being inconsistent, and happily got correct yaw from the fusion of GPS velocity and other sensors for the rest of the flight.

The search

After the takeoff things went much more smoothly. The plane headed off to the search area as planned, and tracked the mission extremely well. It is interesting to compare the navigation accuracy of this year's mission with the 2012 mission. In 2012 we were using a simple vector field navigation algorithm, whereas we now use the L1 navigation code. This year we also used Paul's EKF for attitude and position estimation, and the TECS controller for speed/height control. The differences are really remarkable. We were quite pleased with how our Mugin flew in 2012, but this year it was massively better. The tracking along each leg of the search was right down the line, despite the 15 knot winds.


Another big difference from 2012 is that we were using the new terrain following capability that we had added this year. In 2012 we used a python script to generate our waypoints and that script automatically added intermediate waypoints to follow the terrain of the search area. This year we just set all waypoints to 100 meters AGL and let the autopilot do its job. That made things a lot simpler and also resulted in better geo-referencing and search area coverage.

On our GCS imaging display we had 4 main windows up. One is a "live" view from the camera. That is set up to update only if there is plenty of spare bandwidth on our radio links, and is really just there to give us something to watch while the imaging code does its job, plus to give us some situational awareness of what the aircraft is flying over.
The second window is the map, which shows the mission flight path, the geo-fence, two plane icons (AHRS position estimate and GPS position estimate) plus overlays of thumbnails from the image recognition system of any "interesting" objects the Odroid has found.

The 3rd window is the "Mosaic" window. That shows a grid of thumbnails from the image recognition system, and has menus and hot-keys to allow us to control the image recognition process and sort the thumbnails in various ways. We expected Joe to have an image score of over 1000, but we set the threshold for thumbnails to display in the mosaic as 500. Setting a lower threshold means we get shown a lot of not very interesting thumbnails (things like oddly shaped tree stumps and patches of dirt that are a bit out of the ordinary), but at least it means we can see the system is working, and it stops the search being quite so boring.

The final window is a still image window. That is filled with a full resolution image of whatever image we have selected for download from the aircraft, allowing us to get some context around a thumbnail if we want it. Often it is much easier to work out what an object is if you can see the surroundings.

On top of those imaging windows we also have the usual set of flight control windows. We had 3 laptops setup in our GCS tent, along with a 40" LCD TV for one of the laptops (my thinkpad). Stephen ran the flight monitoring on his laptop, and Matt helped with the imaging and the antenna tracker from his laptop. Splitting the task up in this way helped prevent overload of any one person, which made the whole experience more manageable.

Stephen's laptop had the main MAVProxy console showing the status of the key parts of the autopilot, plus graphs showing the consistency of the various subsystems. The ardupilot code on Pixhawk has a lot of redundancy, at both the sensor level and the higher algorithm level, and it is useful to plot graphs showing any significant discrepancies, giving us warning of a potential failure. For this flight everything went smoothly (apart from the takeoff) but it is nice to have enough screens to show this sort of information anyway. It also makes the whole setup look a bit more professional to have a few fancy graphs going :-)

About 20 minutes after takeoff the imaging system spotted Joe lying in his favourite position in a peanut field. We had covered nearly all of the first pass across the search area so we were glad to finally spot him. The images were very clear, so we didn't need to spend any time discussing if it really was Joe or not.

We clicked the "Show Position" menu item we'd added to the MAVProxy map, which pops up a window giving the position in various coordinate systems. Greg passed this to the judges who quickly confirmed it was within the required 100 meters of Joe.

The bottle drop

That initial position was only a rough estimate though. To refine the position we used a new trick we'd added to MAVProxy. The "wp movemulti" command allowed us to move and rotate a pre-defined confirmation flight pattern over the top of Joe. That set up the plane to fly a butterfly pattern over Joe at an altitude of 80 meters, passing over him with wings level. That gives an optimal set of images for our image recognition system to work with, and the tight turns allow the wind estimation algorithm in ardupilot to get an accurate idea of wind direction and speed.
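Moving and rotating a waypoint pattern in this way amounts to rotating a set of north/east metre offsets and re-centring them on Joe. A flat-earth sketch of the idea (not the actual MAVProxy implementation):

```python
import math

def place_pattern(center_lat, center_lon, offsets_m, bearing_deg):
    """Rotate (north, east) metre offsets clockwise by a compass bearing
    and convert them to lat/lon points around a centre. Flat-earth
    approximation, fine at the scale of a confirmation pattern."""
    r = math.radians(bearing_deg)
    pts = []
    for north, east in offsets_m:
        n = north * math.cos(r) - east * math.sin(r)
        e = north * math.sin(r) + east * math.cos(r)
        # roughly 111320 metres per degree of latitude
        pts.append((center_lat + n / 111320.0,
                    center_lon + e / (111320.0 * math.cos(math.radians(center_lat)))))
    return pts

# a 100m-north offset rotated to bearing 090 ends up due east of the centre
pts = place_pattern(0.0, 0.0, [(100.0, 0.0)], 90.0)
```

Rotating the whole pattern to match the wind means each pass over the target is flown into wind with wings level, which is what the imaging and wind estimation both want.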

In the weeks leading up to the competition we had spent a lot of time refining our bottle drop system. We realised the key to a good drop is timing consistency and accurate wind drift estimation.

To improve the timing consistency we had changed our bottle release mechanism to be in two stages. The bottle was held to the bottom of the Porter by two servos. One servo held the main weight of the bottle inside a plywood harness, while the second servo was attached to the top of the parachute by a wire, using a glider release mechanism.

The idea is that when approaching Joe for the drop the main servo releases first, which leaves the bottle hanging beneath the plane held by the glider release servo with the parachute unfurled. The wind drags the bottle at an angle behind the plane for 2 seconds before the 2nd servo is released. The result is much more consistent timing of the release, as there is no uncertainty in how long the bottle tumbles before the chute unfolds.

The second part of the bottle drop problem is good wind drift estimation. We had decided to use a small parachute as we were not certain the bottle would withstand the impact with the ground without a chute. That meant significant wind drift, which meant we really had to know the wind direction and speed quite accurately, and also needed some data on how fast the bottle would drift in the wind.

In the weeks before the competition we did a lot of bottle drops, but the key one was a drop just a week before we left for Kingaroy. That was our first drop in completely still conditions, which finally gave us a "zero wind" data point, important for calculating the wind drift. Combining that drop with some previous drop results we came up with a very simple formula: the wind drift distance for a drop from 80 meters is 5 times the wind speed in meters/second, as long as we drop directly into the wind.

We had added a new feature to APM:Plane to allow an exact "acceptance radius" for an individual waypoint to be set, overriding the global WP_RADIUS parameter. So once we had the wind speed and direction it was just a matter of asking MAVProxy to rotate the drop mission waypoints to match the wind (which was coming from -121 degrees) and to set the acceptance radius for the drop to 35 meters (70 meters for zero wind, minus 7 times 5 for the 7m/s wind the EKF had estimated).
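The arithmetic behind that setting, as a tiny sketch:

```python
def drop_acceptance_radius(wind_speed_ms, zero_wind_radius=70.0, drift_per_ms=5.0):
    # empirical fit: ~5m of bottle drift per m/s of wind for an 80m drop,
    # subtracted from the 70m zero-wind release distance
    return zero_wind_radius - drift_per_ms * wind_speed_ms

radius = drop_acceptance_radius(7.0)  # 35.0 metres for the 7m/s wind
```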

The plane then slowed to 20m/s for the drop, and did a 350m approach to ensure the wings were nice and level at the drop point. As the drop happened we could see the parachute unfurl in the camera view from the aircraft, which was a nice confirmation that the bottle had been successfully dropped.

The mission was setup for the aircraft to go back to the butterfly confirmation pattern after the drop. We had done that to allow us to see where the bottle had actually landed relative to Joe, in case we wanted to do a 2nd drop. We had 3 bottles ready (one on the plane, two back at the airfield), and were ready to fit a new bottle and adjust the drop point if we'd been too far off.

As soon as we did the confirmation pass it became clear we didn't need to drop a 2nd bottle. We measured the distance of the bottle to Joe as under 3 meters using the imaging system (the judges measured it officially as 2.6m), so we asked the plane to come back and land.

The landing

This year we used a laser rangefinder (an SF/02 from LightWare) to help with the landing. Using a rangefinder really helps ensure the flare is at the right altitude and produces much more consistent landings.

The only real drama we had with the landing was that the plane came in a bit fast, and ballooned more than it should have on the flare. The issue was that we hadn't used a shallow enough approach. Combined with the quite strong (14 knot) cross-wind it was an "interesting" landing.

We also should have set the landing point a bit closer to the end of the runway. We had put it quite a way along the runway as we weren't sure if the laser rangefinder would pick up anything strange as it crossed over the road and airport boundary fence, but in hindsight we'd put the touchdown point a bit too close to the geofence. Full size glider operations running at the same time meant only part of the runway was available for OBC teams to use.

The landing was entirely successful, and was probably better than a manual landing would have been by me in the same wind conditions (I'm only a mediocre pilot, and landings are my weakest point), but we certainly can do better. Paul Riseborough is already looking at ways to improve the autoland, hopefully producing something that draws a round of applause from spectators at future landings.

Radio performance

Another part of the mission that is worth looking at is the radio performance. We had two radio links to the aircraft - one was a RFD900 900MHz radio, and that performed absolutely flawlessly as usual. We had around 40 dB of fade margin at a range of over 6km, which is absolutely huge. Every team that flew in the OBC this year used a RFD900, which is a great credit to Seppo and the team at RFDesign.

The 2nd radio was a Ubiquiti Rocket M5, which is a 5.8GHz ethernet bridge. We used an active antenna tracker this year for the 5.8GHz link, with a 28dBi MIMO antenna on the ground, and a 10dBi MIMO omni antenna in the aircraft (the protrusion from the top of the fuselage is for the antenna). The 5.8GHz link gave us lots of bandwidth for the images, but was not nearly as reliable as the RFD900 link. It dropped out 6 times over the course of the mission, with the longest dropout lasting just over a minute. The dropouts were primarily caused by poor magnetometer calibration on the antenna tracker - during the mission we had to add some manual trim to the tracker to improve the alignment. That worked, but really we should have used a ground antenna with a bit less gain (maybe around 24dBi) to give us a wider beam width.

Another alternative would have been to use a lower frequency. The 5.8GHz Rocket gives fantastic bandwidth, but we don't really need that much bandwidth for our system. The Robota team used 2.4GHz Ubiquiti radios and much simpler antennas and ended up with a much better link than we had. The difference in path loss between 2.4GHz and 5.8GHz is quite significant.
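The free-space path loss difference is easy to quantify. All else being equal (same antennas, same transmit power, which is an idealisation), moving from 5.8GHz down to 2.4GHz buys roughly 7.7dB of link budget:

```python
import math

def fspl_db(freq_hz, dist_m):
    # free-space path loss: 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)
    c = 299792458.0
    return (20.0 * math.log10(dist_m) + 20.0 * math.log10(freq_hz)
            + 20.0 * math.log10(4.0 * math.pi / c))

# extra loss of 5.8GHz over 2.4GHz at the ~6km ranges we flew
extra_loss_db = fspl_db(5.8e9, 6000.0) - fspl_db(2.4e9, 6000.0)  # ~7.7 dB
```

In practice the antenna gains differ too, but the frequency-squared term alone explains a lot of why the 2.4GHz link held up better.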

The reason we didn't use the 2.4GHz gear is that we do most of our testing at a local MAAA flying club, and we know that if someone crashes their expensive model while we have a powerful 2.4GHz radio running then there will always be the thought that our radio may have caused interference with their 2.4GHz RC link.

So we're now looking into the possibility of using a 900MHz ethernet bridge. The Ubiquiti M900 looks like a real possibility. It doesn't offer nearly as much bandwidth as the 5.8GHz or 2.4GHz radios, as Australia only has 13MHz of spectrum available in the 900MHz band for ISM use, but that should still be enough for our application. We have heard that the spread spectrum M900 doesn't significantly interfere with the RFD900 in the same band (as the RFD900 is a TDM frequency hopping radio), but we have yet to test that theory.

Alternatively we may use two RFD900s in the same aircraft, with different numbers of hopping channels and different air data rates to minimise interference. One would be dedicated to telemetry and the other to image data. An RFD900 at 128kbps should be plenty for our cuav imaging system as long as the "live camera view" window is set to quite a low resolution and update rate.
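A back-of-envelope check of that claim (the image sizes and the 20% protocol overhead here are assumptions, not measurements):

```python
def image_transfer_time_s(image_bytes, link_kbps, overhead_fraction=0.2):
    # usable payload rate in bytes/second after protocol overhead
    usable_bytes_per_s = link_kbps * 1000.0 * (1.0 - overhead_fraction) / 8.0
    return image_bytes / usable_bytes_per_s

# a few-KB thumbnail goes quickly; a full-resolution image takes seconds
thumb_s = image_transfer_time_s(4000, 128)     # ~0.3s
full_s = image_transfer_time_s(100000, 128)    # ~7.8s
```

That is fine for candidate-object thumbnails, which is why the live camera view would have to stay at low resolution and a low update rate on such a link.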

Team cooperation

One of the most notable things about this year's competition was just how friendly the discussions between the teams were. The competition has a great spirit of cooperation and it really is a fantastic experience to work closely with so many UAV developers from all over the world.

I don't really have time to go through all the teams that attended, but I do want to mention some of the highlights for me. Top of the list would have to be meeting Ben and Daniel Dyer from team SFWA. They put in an absolutely incredible effort to build their own autopilot from scratch. Their build log at http://au.tono.my/log/index.html is remarkable to read, and shows just what can be achieved in a short time with enough talent. It was fantastic that they completed the challenge (the first team to ever do so) and I look forward to seeing how they take their system forward.

I'd also like to offer a tip of the hat to Team Swiss Fang. They used the PX4 native stack on a Pixhawk and it was fantastic to see how far they pushed that autopilot stack in the lead up to the competition. That is one of the things that competitions like the OBC can do for an autopilot - push it to much higher levels.

Team OpenUAS also deserves a mention, and I was especially pleased to meet Christophe, who is one of the key people behind the Paparazzi autopilot. Paparazzi is a real pioneer in the field of amateur autopilots. Many of the posts we make on "ardupilot has just achieved X" on diydrones could reasonably be answered with "Paparazzi did that 3 years ago". The OpenUAS team had bad luck in both the 2012 competition and again this year. This time round it was an airspeed sensor failure which led to a crash soon after takeoff, which is really tragic given the effort they have put in and the pedigree of their autopilot stack.

The Robota team also did very well, coming in second behind our team. Particularly impressive was the performance of the Goose autopilot on a quite small foam plane in the wind over Kingaroy. The automatic landing was fantastic. The Robota team used a much simpler approach, just using a 2.4GHz Ubiquiti link to send a digital video stream to 3 video monitors and having 3 people staring at those screens to find Joe. Extremely simple, but it worked. They were let down a bit by the drop accuracy in the wind, but a great effort done with style.

I was absolutely delighted when Team Thunder, who were also running APM:Plane, completed the challenge, coming in 4th place. They built their system partly on the image recognition code we had released, which is exactly what we hoped would happen. We want to see UAV developers building on each others work to make better and better systems, so having Team Thunder complete the mission was great.

Overall ardupilot really shone in the competition. Over half the teams that flew in the competition were running ardupilot. Our community has shown that we can put together systems that compete with the best in the world. We've come a long way in the last few years and I'm sure there is a bright future for more developments in the UAV S&R space that will see ardupilot saving lives on a regular basis.

Thank you

On behalf of CanberraUAV I'd like to offer a huge thank you to the OBC organisers for the massive effort they have put in over so many years to run the competition. Back in 2007 when the competition started it took real insight for Rod Walker and Jon Roberts to see that a competition of this nature could push amateur UAV technology ahead so much, and it took a massive amount of perseverance to keep it going to the point that teams were able to finally complete the competition. The OBC has had a huge impact on amateur UAV technology.

We'd also like to thank our sponsors, 3DRobotics, who have been a huge help for CanberraUAV. We really couldn't have done it without you. Working with 3DR on this sort of technology is a great pleasure.

Next steps

Completing the Outback Challenge isn't the end for CanberraUAV and we are already starting discussions on what we want to do next. I've posted some ideas on our mailing list and we would welcome suggestions from anyone who wants to take part. We've come a long way, but we're not yet at the point where putting together an effective S&R aircraft is easy.



Comment by Andrew Tridgell on October 7, 2014 at 1:39pm

I've now pushed the changes for compass-independent takeoff into master, ready for the next release:

https://github.com/diydrones/ardupilot/commit/71d786187e42333d991b6...

The takeoff issue we had was easy to reproduce in SITL by deliberately setting bad compass offsets, and is solved with this change.

Comment by JB on October 7, 2014 at 4:24pm

Thanks Tridge. We'll give it a whirl this weekend to see how it works.

Are there any plans to add reverse thrust to auto landings atm? With the X8 the biggest issue is the super long approach (or a too-fast one!) on landings because of its glide angle. Reverse thrust would be the easiest way to slow her down on approach without adding any hardware, and would allow a more aggressive approach for landings in small clearings.

Regards.


Comment by Andrew Tridgell on October 8, 2014 at 4:20pm

@JB, I haven't thought about reverse thrust for the plane code. I don't actually know how the reversible ESCs work. Do you just use a PWM below 1500 for reverse, like in rovers? Which ESCs support this that would be suitable for an X8?

I added reverse support in rover for braking in corners, which I imagine would be similar to reverse thrust in planes.

Cheers, Tridge

Comment by Andrew Rabbitt on October 8, 2014 at 9:46pm

I have configured the ESC for my soon-to-fly Skyfun using SimonK firmware.  This puts zero at mid-PWM (1500?).

I was planning on putting an addition to one of the FBW modes that added reverse thrust quasi-sinusoidally based on pitch attitude below horizontal when at zero tx throttle position.  In theory this should allow rapid descents without allowing the airspeed to become uncontrollable.

It turns out that TJ Bordelon was playing about with this a while back!

Anyway, here's my bench testing of the SimonK-flashed ESC.

Comment by Andrew Rabbitt on October 8, 2014 at 9:52pm

Another blog post of TJ's work here

Comment by JB on October 9, 2014 at 12:06am

Hey Tridge. Thanks for spending some of your time to look at this, despite your busy schedule!

As you already know, the two most critical and risky flight maneuvers an aircraft can do are takeoff and landing. I think we've solved the takeoff component by installing a propulsion setup that produces enough thrust for steep climb angles of around 60-70 degrees whilst maintaining an efficient cruise of around 6-8 amps (4S) at 20m/s. This type of launch can easily be set up in the current parameters and allows us to launch the fully laden 4.5kg OBC X8 from fairly tight (~50m) fields with 100% confidence using a simple bungee. But with the X8, landing feels like going down a hill on a bike with no brakes: we can launch from tight areas but typically cannot land in them. This severely restricts the X8's deployment potential in the field, and as was also the case at the OBC (or in SAR) makes us reluctant to do auto landings at all, because there is no way to maintain minimal airspeed on approach with an X8 (or other efficient wing for that matter) to get accurate, reliable and repeatable low impact landings on a small landing area.

I think a fully automated SAR platform simply needs to be recoverable and land with reliable repeatability, so that it is and remains useful at collecting information about the SAR area. This was also a lesson we learned as a team at the OBC: if we'd been more confident to auto land the X8 in the prevailing conditions, because we knew for certain it would fly again without any repairs, we'd have likely done so to either re-enable or replace the onboard odroid doing recog, or grab the images directly off the camera. That in turn would have added another tier of redundancy, and would have worked despite our wifi failing and the backup 4G connection being just that little bit too slow to upload to find Joe in time! Being able to recover the aircraft intact along with the images means that every mission becomes a successful one and makes it likely that it can be done again and produce meaningful results. Limited access to a large field, and the airframe's limited range because of LOS restrictions etc, make the difference between being operationally capable and not being of any assistance at all in an actual SAR mission, and this was something that was highlighted in our experience together in the Snowies last year. But I digress.

There are a few ESCs that support reverse thrust, but typically these are mostly used in quads etc, and need to run a custom firmware like SimonK that has this functionality as a built-in parameter. This firmware is compatible with many Atmega controlled ESCs. HK, among others, have a selection of those that are suitable for an X8; the parameter needs to be enabled in the SimonK firmware. A car ESC works as well if calibrated properly, and yes, the easiest way to command this is simply by using a low PWM to the ESC like on a car, first for braking and then reversing. Complementary PWM use is advised for this. We've also used reverse thrust on the TX rudder stick channel (no rudder on an X8!) for manual use, which keeps it separate from normal thrust control and doesn't limit the TX throttle movement range, but at times it's difficult to gauge airspeed on a steep descent, so having airspeed in the loop I think would help greatly and is of course essential for auto modes. It's also a good way to test the effectiveness of reverse thrust and various propulsion setups without too much effort. Ideally though we'd be using something like the ESC32 with a slightly modded firmware, as that already has built-in motor and current monitoring, and even a regen mode, plus CAN connectivity etc. for full feedback control.

I'm not fluent enough to describe how the code should function, but I was thinking along the lines that TECS could potentially be used to maintain a balance between forward momentum and altitude to follow a proposed glide slope to the target landing WP, regardless (within reason and the aircraft's flight envelope) of the attitude of the aircraft, by using forward/reverse throttle control. (This might be extended to support quad/plane VTOL hybrid flight transitions as well.) The approach itself will likely need a low altitude, close-to-ground WP where the aircraft "recovers" after the "steep descent reverse thrust" component and then glides to the ground as it normally would without power. There might be some call for tuning parameters that restrict the range of reverse thrust used, the min/max airspeed (stall) or descent gradient, or how aggressively it responds at what altitude (progressively less steep approach), and possibly take other factors like ground speed into account to compensate for wind drift etc to maintain the accuracy of the approach to the landing WP. Restricting steep bank angles might also be useful to avoid tip stalls and to trigger auto aborts on approach, which tend to be prevalent in wings especially on approach. Lidar for the flare calculation would of course be the icing on the cake! I believe being able to determine the speed, altitude and attitude of the aircraft at a certain point in space before touch down would get rid of quite a few variables that lead to undesired landings.

Having a universal "landing retarder" function that can output to and command a separate RC channel would allow the code to drive flaps, airbrakes, a parachute, a drag chute, split elevons, or a variable-pitch prop on a fuel-powered aircraft, to similar effect. It could also be extended to act as a safety feature that activates to reduce impact inertia on unintended steep approaches while in manual mode, for training purposes etc. Triggers for alternative retarder methods could then be incorporated as well, so that a parachute, for example, would only deploy in certain situations, like when the Pixhawk can no longer maintain flight control. Overall I think this is a worthwhile addition to the ArduPlane code: more speed control during landing will make landings safer, easier to control overall, and will minimize speed-related damage to the aircraft. It would also be useful for flight termination, as was required at the OBC. Generally, having an airframe-specific, tuneable takeoff and landing sequence onboard would allow for more consistent results. As a bonus it would be very easy to implement on nearly any electric aircraft flying ArduPilot, even those with long glide ratios and essentially good range. The extreme version would of course be to prop hang! ;)
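One way to picture the universal retarder output: map a single 0..1 retarder demand onto a spare servo channel, whatever device hangs off it. This is purely illustrative; the name and PWM endpoints are assumptions, not an existing ArduPlane function:

```cpp
#include <algorithm>

// Map a 0..1 "landing retarder" demand onto a spare RC output channel
// (flaps, airbrake, drag-chute release, variable-pitch prop, ...).
// Assumes a standard 1000-2000 us servo output range.
int retarder_pwm(double demand) {
    demand = std::clamp(demand, 0.0, 1.0);
    return 1000 + static_cast<int>(demand * 1000.0);
}
```

Device-specific triggers (e.g. parachute only past a given demand or failure condition) would then sit on top of this one channel.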

Thanks again for all your awesome work, and I hope this makes it onto the ArduPlane development list! If so, I'll happily dedicate myself to assisting in any way I can, including flight testing on a selection of airframes, as I think this is one of the most crucial missing components of an already great autopilot platform that has proven capable of doing SAR.

Regards 

Comment by JB on October 9, 2014 at 12:13am

Thanks Andrew R. for those links. Our posts crossed!

I should refresh more often when I do long posts! ;)


Comment by Andrew Tridgell on October 9, 2014 at 3:36am

@JB, you've convinced me it's worth a try at least. Can you open an issue for this? If someone else wants to contribute a patch, then looking at the braking code in rover would be a good start (it uses reverse to do sharp cornering). There are two parameters, BRAKING_PERCENT and BRAKING_SPEEDERR.

BRAKING_SPEEDERR is the minimum amount of overspeed before it considers braking. BRAKING_PERCENT is the amount of braking to apply (multiplied by the speed error). The code is here:

https://github.com/diydrones/ardupilot/blob/master/APMrover2/Steeri...
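The braking rule described above can be sketched roughly as follows: no braking until the overspeed exceeds BRAKING_SPEEDERR, then reverse throttle proportional to the speed error, scaled by BRAKING_PERCENT. This is a simplified paraphrase adapted to airspeed error, not the actual APMrover2 code:

```cpp
#include <algorithm>

// Simplified illustration of the rover-style braking rule, using
// airspeed error for a plane: below the BRAKING_SPEEDERR threshold
// there is no braking; above it, reverse throttle grows with the
// overspeed, scaled by BRAKING_PERCENT and capped at -100%.
float braking_throttle_pct(float airspeed, float target_airspeed,
                           float braking_speederr, float braking_percent) {
    float speed_error = airspeed - target_airspeed;  // m/s of overspeed
    if (speed_error < braking_speederr) {
        return 0.0f;  // within tolerance: no reverse thrust
    }
    return -std::min(braking_percent * speed_error, 100.0f);
}
```

E.g. with BRAKING_SPEEDERR = 3 and BRAKING_PERCENT = 10, flying 5 m/s over the target airspeed would command 50% reverse throttle.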

For a plane it would work similarly, I think, using the amount of airspeed error. It could cause some interesting issues with backwash over pitot tubes if they are mounted on the front of a pusher plane (e.g. an X8). Maybe you'd need to mount the pitot out on the wing.

This could either be built into the AP_TECS library, or could be an add-on in the main vehicle code.

Note that 3.1.1 has a LAND_FLAP_PERCNT parameter to control landing flaps, which could be used if you had flaps. That would be a much more conventional approach. It would be fun to try reverse thrust though!

Cheers, Tridge

Comment by JB on October 9, 2014 at 8:03am

Sweet. Thanks Tridge.

I'll open an issue as instructed. Thanks for the info on the flaps code; I didn't notice that one on release. Regrettably, though, a flying wing can't use flaps, as it hasn't got a tail with elevators to counteract the flap-induced pitch change. Split elevons, like the Dutch OBC team or the B2 use, would work, but are significantly less effective and more complicated to install and use than reverse thrust! I'll keep an eye on pitot tube placement with reverse thrust; I'm hoping it shouldn't disturb the airflow too much while still flying forwards, and if it does I'll move it.

Comment by Tomas Soedergren on November 12, 2014 at 9:38am

JB and Tridge - Re: X8 landing performance

While I welcome the initiative to explore thrust reversing to achieve a steeper glide path (and a shorter ground-stop distance on planes with landing gear and an electric drive line), I would like to point out that using FLAPS for landing works excellently on the X8; this is my own experience. So far I've made only two auto landings, but both were successful, in gusty wind. The flap system provides a steeper descent angle on final, a shorter landing distance and a reduced risk of tip stall. On the X8 it does require a modification to create the flap panels, but fortunately it's an easy one.

The UX5, the latest mapping drone from Trimble, uses reverse thrust to obtain a steep descent angle:

http://uas.trimble.com/ux5

/ Tom

My X8:

http://youtu.be/rnbnatoBcl8

