3D Robotics

Q: In a nutshell, what's the AttoPilot project about?

The AttoPilot is about bringing a full-function, powerful autopilot to a larger crowd, many of whom have no interest in "DIY" and just want something they can buy and use. I myself wanted to buy a hobby autopilot in late 2005 knowing nothing except "I'll look on the Web, surely I'll find something".

I was SHOCKED that the cheapest unit that met my basic requirements cost $900 (the UNAV NAT). Everything I found was either really heavy/expensive, or home-made looking. I knew there was much room to improve, even for somebody like me with no idea (at the time) how to make something like an autopilot that I would let fly my airplane.

So, I started out only with ideas of how it "should" work, with no idea of what roadblocks the hardware/coding might have in store for me. I think this naiveté was an asset. Features of the initial release hardware rev (other versions coming, some smaller):
  1. Low weight: 36 grams (1.25 ounces) is the TOTAL added to your airplane, including GPS, antenna and sensors. 50% of that is just the CoPilot head and cable (18 grams). The IMU version is lighter, at 22 grams total system mass. [note: the pictures here are of prototype hardware]
  2. Small size: Atto is a single package of about 25x20x10mm with right-angle header connections to the servos, like a small 6 channel Rx. The only external pieces are the GPS module and CoPilot sensor head.
  3. Rx interface: Atto reads the channels from your Rx through the servo jacks, no soldering to the Rx board. Atto is designed to read 3V/5V logic signals automatically, and ANY pulse order from the Rx, be it overlapping or serial, Atto doesn't care. It reads with precision to < 1us.
  4. Integrated data logging to micro SD card, and USB jack for interface to your PC. Currently in testing, I have about 20 parameters logged at a user-configured sampling rate, up to 5 Hz. You can view the flight history in Google Earth as a 3D path, plus analyze many details of the flight, including how the autopilot performed and what it was "thinking" at the time. You can use this data to know the precise location of aerial photos.
  5. 100,000 waypoints that are 3D. Make your flight plan graphically in Google Earth, then use a simple formatting application in Windows to hand-edit the path and add an XOR checksum to each line. The finished file goes on the SD card via the USB jack.
  6. No limit to distance between waypoints, or to how far the unit can fly from the starting point. Proven to work in all hemispheres (N/S, and E/W of the prime meridian).
  7. Integrated static and dynamic barometers with special data-filtering routines, for barometric altitude control (cross-referenced against GPS altitude) and pitot-tube airspeed.
  8. Currently 8 triggerable events at each waypoint. What the 8 events do is user-configurable, be it positions of auxiliary servos, etc. Useful for aerial photography.
  9. Drives up to 6 servos, with user-configured mixing of servo outputs (for V-tail, elevons, or even just 2 separate aileron servos).
  10. Telemetry jack to the XTend modem by Maxstream. This is currently being developed but is on track. Working now is the downlink to moving map software, but goal is 2-way com to take R/C control at long range, upload new waypoints, and issue commands such as RTL.
  11. Stability/attitude control: FMA's CoPilot thermopile head is used in a special way: By fusion of the thermopile data with barometer and GPS data, a very robust and self optimizing system is created. Auto trimming during flight, auto gain adjustment.
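As an aside on item 5, the per-line XOR checksum can be computed NMEA-style: XOR every character byte, then append the result as two hex digits. A minimal Python sketch (the waypoint line format shown is hypothetical; AttoPilot's actual file format isn't published):

```python
def xor_checksum(line: str) -> int:
    """XOR all character bytes in the line, NMEA-style."""
    cs = 0
    for ch in line.encode("ascii"):
        cs ^= ch
    return cs

def add_checksum(line: str) -> str:
    # Append the checksum as two hex digits after a '*', as NMEA does.
    return f"{line}*{xor_checksum(line):02X}"

# Hypothetical waypoint line: lat, lon, altitude (m)
print(add_checksum("33.4484,-112.0740,350"))
```

The receiver recomputes the XOR over the line body and compares it against the two hex digits to reject corrupted lines.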

Pricing - I can't comment on this yet, as I don't know what it will cost to have these made by a professional assembly house (I will not sell a hand-made AttoPilot, people deserve at least the same quality as is in their TV remote). I can tell you that my desire is to sell something 20x more functional than anything else currently available, and for about HALF the price of other hobby autopilots. Many people are not interested in DIY, but rather plug-and-play. Of course, there is much more to a drone than the autopilot, so I think these people still fall in the spirit of "DIY Drones".

Schedule of availability: Summer 2008. Beta testing is still going strong with rapid roll-outs of firmware upgrades, and hardware add-on PCBs that will be integrated into final production version.

Q: What's the origin of the name "AttoPilot"?

Well, you have micro and pico pilots, and those refer to the SI prefixes for a millionth and a trillionth (1×10⁻⁶ and 1×10⁻¹²), respectively. I have talked to Curt Olson off and on, so I know that guys like him don't need the tiniest autopilot, but for people that do, I'll offer something smaller than the "pico" pilot. "Atto" is the SI prefix for 1×10⁻¹⁸, or a millionth of a trillionth. Also, Atto is like Auto, so instead of an auto-pilot, you get the name Atto-Pilot!


BTW, the Beta AttoPilot uses a pretty big DIP processor and breakout boards from sparkfun.com, but even then with GPS the unit comes in at 1.6 ounces, which is already HALF of the Pico Pilot's assembly of PCBs that makes up their complete system. By going to a NEWER GPS than the ETek unit and making Atto 100% SMD [surface mount device], the mass drops down to the figures I quoted, in the < 1 ounce range.


In addition, I have a 6 gram "shrink" version with an integrated Kalman IMU on the same PCB as the integrated GPS, antenna and other sensors. This version is suitable to fit INSIDE the wing of a tiny MAV (like a 6" span plane). This is a work in progress, but I suppose my point is that my autopilots will TRULY be Atto-sized over this coming year.


[You can read more about this project and its inception in these two RCGroups threads. Latest beta details. The origin of the project emerged in this thread]


[Part 1 of the interview is here. Part 3 is here.]

Developer

Final version of ArduIMU (Include Video)

After a few more tests, I now have the final version of the ArduIMU. The main problem was the calibration of the gyros: converting the raw data to degrees/s. I had to change the source code, compile it, upload, and repeat it over and over again. A total waste of time. So I configured Arduino to externally adjust the gyros and the accelerometer jitter filter, using the nunchuck potentiometers (joystick axes). Now it's working pretty well. =)

In the source code, please ignore all the lines marked with "//(kalman)"; these lines are related to the Kalman filter code. Those marked "//(nunchuck)" are related to the nunchuck accelerometer and axis control, and all marked "//(gyro)" are related to control of the gyroscope.

Another tip: all the data is processed in radians, then converted to degrees (radians × 180/π = radians × 57.295779 = degrees). If somebody needs any help, please feel free to ask me.

Video: yellow = accelerometer reading only; red = accelerometer + gyro + Kalman filter.
Arduino source code: http://sites.google.com/site/jordiuavs/Home/kalman_filter_by_Jordi.txt
LabVIEW (ground station) source code: http://sites.google.com/site/jordiuavs/Home/kalman_filter_by_Jordi.txt
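For readers who want to experiment off-board before touching the Arduino code, here is a minimal single-axis angle/bias Kalman filter in Python in the same spirit as Jordi's gyro+accelerometer fusion (the structure is the common two-state filter; the tuning constants are illustrative, not Jordi's values):

```python
import math

RAD2DEG = 180.0 / math.pi  # ≈ 57.295779, as noted above

class Kalman1D:
    """Single-axis angle estimator fusing a gyro rate with an
    accelerometer-derived angle (illustrative sketch)."""

    def __init__(self, q_angle=0.001, q_bias=0.003, r_measure=0.03):
        self.q_angle, self.q_bias, self.r = q_angle, q_bias, r_measure
        self.angle = 0.0   # estimated angle (rad)
        self.bias = 0.0    # estimated gyro bias (rad/s)
        # Start with large uncertainty so the first measurements dominate
        self.P = [[1.0, 0.0], [0.0, 1.0]]

    def update(self, accel_angle, gyro_rate, dt):
        # Predict: integrate the bias-corrected gyro rate
        rate = gyro_rate - self.bias
        self.angle += dt * rate
        P = self.P
        P[0][0] += dt * (dt * P[1][1] - P[0][1] - P[1][0] + self.q_angle)
        P[0][1] -= dt * P[1][1]
        P[1][0] -= dt * P[1][1]
        P[1][1] += self.q_bias * dt
        # Correct with the accelerometer-derived angle
        y = accel_angle - self.angle
        s = P[0][0] + self.r
        k0, k1 = P[0][0] / s, P[1][0] / s
        self.angle += k0 * y
        self.bias += k1 * y
        p00, p01 = P[0][0], P[0][1]
        P[0][0] -= k0 * p00
        P[0][1] -= k0 * p01
        P[1][0] -= k1 * p00
        P[1][1] -= k1 * p01
        return self.angle * RAD2DEG  # report in degrees, as in the post
```

Feeding it a steady accelerometer angle and a zero gyro rate should converge on the accelerometer value while the gyro suppresses jitter during fast motion.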

To do:
  • Integrate another gyro (to make it dual-axis)
  • Program the SPI altimeter
  • Put all the hardware together and make it work (GPS, altimeter, IMU, modem)

Done:
  • Servo control
  • GPS waypoint navigation (I'll post later about it)

In the next article I'm going to post all the links you need to get started with the Arduino environment: tutorials, examples, references, tips and tricks. If you already know C++ it's going to be 99% easier. If not, you are going to discover why C/C++ is the most famous language ever ;-) Then jumping to any AVR, ARM or PIC microcontroller C/C++ environment is going to be a lot easier, because they're all based on the same principles.

3D Robotics
Every member of DIY Drones has the ability to create their own blog posts and discussion topics (Please do! This is a community site and its vitality depends on your participation).

Doing so is easy: click here to create a blog post or click here to create a discussion topic (those authoring links are also at the bottom of the respective columns on the front page). But which should you choose?

The following are some guidelines and tips on effective posting on DIY Drones, whether you choose discussion topics or blog posts.

First, the practical differences:

Blog posts:
  • Have more layout control, including inline videos, along with some basic HTML layout and typography tools.
  • Are included in the RSS feed. So when you post a blog entry, subscribers to the feed can read it, even if they don't visit the site. That means more readers.
  • Are posted in the center column in chronological order, and stay in that order.
Discussion topics:
  • Have fewer typographical and layout options
  • Can include files
  • Are not included in the RSS feed
  • Rise and fall on the list at left based on activity (responses)
When should you create a blog post versus a discussion topic?

Blog posts are best when:
  • You have something to say, be it a project report or an interesting bit of news found elsewhere
  • There are pictures or videos to illustrate the point
  • You're writing something longer than a hundred words or so
Discussion topics are best when:
  • You have a question or need help
  • You want to raise an issue for discussion
  • You want to share some code and solicit comment
Finally, here are some tips for good blogging:

  • Start with an image (put the cursor before the first word on the page, and insert an image there). Click the image icon, upload the image and select the following options: wrap, right, 300 pixels (or 200). This will ensure that the post excerpt on the front page shows an image, which is important if you want to be read.
  • Turn all your URLs into proper links by selecting the text you want to be the link, clicking on the chain icon, and pasting the URL in the box.
  • If you edit the blog post later, pick "Choose a date and time" and then enter a date and time close to your original posting time. This ensures that your blog post will stay in chronological order, and not jump to the top of the list regardless of the original posting date.
  • Images are good. More images are better. Video is better yet.
That's pretty much it. Happy posting!




3D Robotics
Okay, now that we've connected our PC to our autopilot's GPS input, let's let Curt walk us through all the things we can do with FlightGear. Starting with this:

Q : Is there a way to use FlightGear to plot a course (waypoints) and have it output GPS to an autopilot as the simulated plane in FlightGear flies that course? Or even output GPS while you fly the plane in FlightGear manually?


For any flightgear flight (manually flown, flown on autopilot, etc.) you can continually output GPS sentences over a serial port. There is a command option that lets you choose exactly what you want to do:

It looks something like:
<protocol>=<channel>,<direction>,<hertz>,<channel_specific_options>

So if you wanted to output standard NMEA strings on COM1 at 1hz using 4800 baud, you would do this:
nmea=serial,out,1,COM1,4800
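On the PC side, whatever listens to that port just reads and parses standard NMEA text. Reading the port itself would typically go through a serial library such as pyserial; the sketch below shows only the parsing side, with NMEA checksum verification and a $GPGGA position/altitude extractor:

```python
def nmea_checksum_ok(sentence: str) -> bool:
    """Verify the two-hex-digit XOR checksum after the '*'."""
    body, _, given = sentence.strip().lstrip("$").partition("*")
    cs = 0
    for ch in body.encode("ascii"):
        cs ^= ch
    return given != "" and cs == int(given, 16)

def parse_gga(sentence: str):
    """Extract (lat, lon, altitude_m) from a $GPGGA sentence.

    NMEA encodes latitude as ddmm.mmm and longitude as dddmm.mmm,
    so the minutes part must be divided by 60.
    """
    f = sentence.split(",")
    lat = int(f[2][:2]) + float(f[2][2:]) / 60.0
    if f[3] == "S":
        lat = -lat
    lon = int(f[4][:3]) + float(f[4][3:]) / 60.0
    if f[5] == "W":
        lon = -lon
    return lat, lon, float(f[9])
```

With FlightGear streaming at 1 Hz, a loop that reads lines from the serial port and hands `$GPGGA` sentences to `parse_gga` is all a simple moving-map or logger needs.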

With this option activated, you can connect a variety of moving map applications, or a PDA, or just about anything else that is capable of connecting up with a gps and doing something. For instance, there is a palm pilot moving map application designed for flying and this can be slaved to a running copy of FlightGear. Here's a video of that:

This particular flightgear command line option is very powerful and allows you (with different configuration choices) to send data through a network socket to another application. FlightGear can also receive data through this same mechanism and draw the view based on the external data.

So you could plug a real gps into flightgear and have flightgear draw a 3d view based on the gps data.

Or you could slave one (or more) copies of FlightGear to a single master copy (sending and receiving the data at 60hz over a network connection) and create a multi-monitor distributed simulator system with wrap-around visuals. That's what we've done here:

File input/output is also an option for this command, so for another example you could fly some route, log the NMEA/GPS strings to a file as you fly (instead of sending them out a serial port), and then feed that file back into your autopilot system later for testing and development.

This all might sound a bit crazy, but one of the design goals of FlightGear is (as much as possible) to be open, flexible, and adaptable to new situations. Exposing a variety of ways to send and receive data allows FlightGear to be adapted to a huge variety of uses.

So just to summarize ...

OUTPUT:


FlightGear can be used to drive external moving maps, external gps's, even other copies of FlightGear to create a multi-monitor wrap-around visual configuration.

INPUT:

FlightGear can take input from a variety of sources, so it can be slaved to an external GPS or IMU outputting data in real time, or it can be used to replay a saved flight. I talked to an engineer who had to ride in the back of a 767 with no windows and wanted to interface FlightGear to the live 767 systems so he could see what was going on while he flew.

MULTIPLAYER:


Don't forget the FlightGear multiplayer system. If you are slaving FlightGear to some external data source, but have the same copy of FlightGear connected to the FG MP system, then suddenly you have the possibility of injecting a real, live flight into the virtual FlightGear world, and anyone else logged onto the multiplayer system can watch your real UAV fly in the virtual world. They can come up and fly with you, etc. If you have multiple UAVs registered in the system they can all see each other in their respective synthetic views, they can be highlighted/circled on the HUD overlay, etc. It's almost a little mind bending to think about, but it's all there and has been successfully demoed. Oh, and we have a Google Maps based multiplayer map system, so if you do have your live UAV flight registered in the MP system via a locally running slaved copy of FlightGear, anyone with a web browser, anywhere in the world, can track your UAV live.

I don't know if all of these things are immediately useful to everyone (or may never actually be useful for anyone) but sometimes you do things just because they are cool and fun and because you can, and somewhere, someplace you hope you've scored a couple geek points. :-)

[In our next installment of this series, we're going to drill down on connecting FlightGear to an IMU...]


X3D SRV1Console

After some distractions and delays, my UAV projects, first described here, are back on track. Yesterday, I received the needed firmware update for the AscTec (X3D) quad - it's now possible to control the quad via the serial interface with no FM transmitter in the loop. As before, the controller is an SRV-1 Blackfin Camera Board, which includes a 500MHz Analog Devices Blackfin BF537 processor with a 1.3 megapixel Omnivision OV9655 camera module and a Lantronix Matchport 802.11b/g radio. The firmware changes were critical for defining the quad's behavior upon loss of signal from the Blackfin.

As mentioned before, I have been working with a new quad rotor called the "X-3D-BL Scientific" from Ascending Technologies GmbH in Stockdorf, Germany, with the concept of integrating the SRV-1 Blackfin camera and radio board with the UAV flight controls. Interface is relatively simple - the X-3D-BL has a very capable onboard inertial measurement unit integrated with the brushless motor controls, so the interface between Blackfin and UAV is a simple 38kbps UART.


My original robot firmware required only minor changes, and I added flight controls to our Java console that was originally designed for a ground-based robot. The 3 columns of buttons on the left are assigned to pitch, roll and yaw control, and the buttons further to the right change or kill the throttle or initialize the controllers. The last column changes video capture resolution. The Java software has an archive capability which I exercised here -

http://www.surveyor.com/images/x3d-srv1-012808.mov

This particular video clip isn't very exciting, as I never take the quad more than 1-2 inches off the ground, but it does show the live view from the quad via the WiFi link and is 100% under control via WiFi from a remote host. There were some pretty good crashes earlier, but unfortunately I wasn't running the archiver at the time. I need to fine-tune the flight controls, and then will hopefully capture some more interesting video.

While this project is furthest along, I now have firmware for the Blackfin board that can either control the airframe via serial interface (e.g. the X3D) or 4 servo channels. The next flyer will be my "baby Hiller" coaxial fixed rotor which steers by shifting battery weight, and then I will start working with the fixed wing "Carbon Prime". It's nice to be making progress again on these projects, and now that I'm back in it, everything else feels like a distraction.
3D Robotics
Matt Chave's CUAV project is a great example of custom autopilot building, but one of the most interesting parts about it is that it uses magnetometers--digital compasses--in addition to the regular accelerometers and gyros. In the second part of our interview (the first part is here) I asked him to explain why:

Q: Can tell us a bit more about why you chose magnetometers and what you've learned working with them?

A few reasons:

We chose magnetometers because the earth's magnetic field is well modeled and provides a very good inertial reference to the aircraft.

We believed that the world magnetic model would provide sufficient coverage and accuracy for us to use for the inertial references. So far we've found it acceptable.

The magnetometers available are capable of detecting rotations to hundredths of a degree. We're not that accurate, but we only need a few degrees of accuracy for the current airframe.

We didn't believe it would be possible to calculate the attitude using a Kalman filter with gyros and accelerometers because of the clock cycles this would consume, and without floating-point maths the solution could become too inaccurate.

We were concerned with the possible noise issues, however we found that most of the noise was high frequency and easily filtered out.

Since we're concerned with the frequencies below 50Hz, the main problem is calibration for soft-iron effects in the aircraft. These are calibrated by rotating the aircraft (easy while it's this small), recording the results, then applying the appropriate biases and offsets.
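A minimal version of that rotate-and-record calibration can be sketched in Python. This simple per-axis min/max fit is an illustrative assumption, not CUAV's actual routine: it removes the hard-iron bias and crudely equalizes axis scales, whereas a full soft-iron correction would use an ellipsoid least-squares fit.

```python
def mag_calibration(samples):
    """Per-axis bias and scale from magnetometer samples taken
    while rotating the airframe through all orientations.

    samples: iterable of (x, y, z) raw readings.
    Returns (bias, scale) lists, one entry per axis.
    """
    axes = list(zip(*samples))                      # [(x...), (y...), (z...)]
    bias = [(max(a) + min(a)) / 2.0 for a in axes]  # center of each axis range
    span = [(max(a) - min(a)) / 2.0 for a in axes]  # half-range per axis
    avg = sum(span) / len(span)
    scale = [avg / s for s in span]                 # equalize axis sensitivity
    return bias, scale

def apply_cal(v, bias, scale):
    """Apply the calibration to one raw reading."""
    return [(vi - b) * s for vi, b, s in zip(v, bias, scale)]
```

After calibration, the corrected readings should trace a sphere centered on the origin as the airframe rotates, which is what the attitude math assumes.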

The magnetometers provide highly repeatable results with a long lifetime, they're very small, consume little power and perhaps could allow us to fabricate a 'system on a chip' solution.

They're also successfully proven on LEO (low earth orbiting) satellites.

Q: Why couldn't you use the magnetometer for roll, too? Indeed, with enough processing power could you imagine a full-featured autopilot with magnetometers alone?

Unfortunately, a single measurement of a vector can only provide two axes of information. There are papers out there which describe using Kalman filters to include dynamics and capture multiple measurements to resolve the third axis without requiring another sensor; however, as far as I'm aware these algorithms are only used on LEO satellites and require the collection of data from a complete orbit to provide a solution. We also don't have a gravity-gradient stabilized airframe, which they rely on, so we chose a roll-stabilized airframe and assumed that the roll would be zero (of course that's not true, but it helps to make it possible).

A good place to read more about the many different attitude determination algorithms is some of the papers available through Google scholar.

To explain it for those of us who are more visual: the image shown here [at the top of the page] shows a single vector measurement viewed in the inertial frame. You can imagine the x, y and z field measurements which the plane would make in the body frame.

Imagine the airframe glued to a cone shape centered around the vector, with the cone's apex at the tail of the airframe. If you were to rotate that cone about its axis (the vector), the measurements of the vector in the aircraft's reference frame would be identical. So there is mathematically no unique solution for the attitude without further information (e.g. zero roll). If we add in another vector we are able to remove this dilemma.
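Matt's two-vector point can be made concrete with the classic TRIAD method, one of the algorithms he mentions trying elsewhere in this interview series (this Python sketch is a generic textbook TRIAD, not CUAV's code). Given two non-parallel vectors measured in the body frame (say, magnetic field and gravity) and the same two vectors known in the inertial frame, it returns the unique body-to-reference rotation:

```python
def _norm(v):
    m = sum(x * x for x in v) ** 0.5
    return [x / m for x in v]

def _cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def triad(body1, body2, ref1, ref2):
    """TRIAD attitude: 3x3 rotation matrix mapping body -> reference.

    body1/body2: two vectors measured in the body frame;
    ref1/ref2: the same vectors in the reference (inertial) frame.
    """
    # Build an orthonormal triad in each frame, anchored on vector 1
    t1b = _norm(body1)
    t2b = _norm(_cross(body1, body2))
    t3b = _cross(t1b, t2b)
    t1r = _norm(ref1)
    t2r = _norm(_cross(ref1, ref2))
    t3r = _cross(t1r, t2r)
    # R = t1r*t1b^T + t2r*t2b^T + t3r*t3b^T
    return [[sum(tr[i] * tb[j]
                 for tr, tb in ((t1r, t1b), (t2r, t2b), (t3r, t3b)))
             for j in range(3)] for i in range(3)]
```

With only one vector pair the cone ambiguity above remains; the second pair is exactly what pins the rotation down.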

3D Robotics

Interview: Matt Chave and the CUAV project

For the second interview in our series, we'll go to Australia, where aerospace engineer Matt Chave has been developing a very interesting open source UAV that uses magnetometers in an interesting way (and he's using FlightGear for simulation!). Called CUAV (formerly Apeliotes), it's pretty much state of the art for amateur UAVs and is full of good ideas that we can all use.


Q: Can you tell us a little about yourself, day job, background, family, education, hobbies, etc?

I've always had a fascination with aircraft, physics, maths, programming and electronics, among other things, and couldn't think of a more perfect way to combine them all than an aircraft flight control system. While I was an undergrad I managed to convince my supervisor at uni that we could make an R/C autopilot for my honours year (actually it didn't take much convincing; he's a bit of a modeler himself). So I did plenty of reading over the summer holidays while I worked for a navigation company in New Zealand called Navman, and was ready to hit the ground running the following year implementing it. The original design was called the Apeliotes project and included the hardware and software to fly a small polystyrene plane. The controller used an ARM7 micro on a custom board:

The only sensor was a three-axis magnetometer, used to determine the error in pitch and heading from the desired attitude, assuming the roll angle was zero. Apeliotes flew fairly well and managed several successful test flights, switching between remotely piloted and autopilot modes while in flight and recovering from any attitude our test pilot could put it in, back to straight and level flight towards geographic north. You can see the video here.

I haven't given up on the magnetometer-only design yet, but it was difficult to make it fly anything other than north and level. Not that there's anything special about geographic north for the magnetometer, especially in Dunedin where we were testing (inclination 70+ degrees, declination about 20 degrees). But it was taking too much time, and we needed to detect roll anyway to fly more sophisticated models. So when I continued the project as a master's with the uni, we added a three-axis accelerometer and toyed with a few computationally inexpensive attitude determination algorithms, including downhill-simplex minimisation of Wahba's loss function, TRIAD, QUEST, etc.

A new controller board was created (below) with the lessons learned from the first along with many other developments and the project changed its name to the CUAV (custom unmanned air vehicle). Yeah, I was sick of the Apeliotes name :)

I've now moved to Australia with the project landing me a great full time aerospace job and am continuing the open source development of the CUAV in my spare time.

Q: What's unique about CUAV?

I'll try to summarise some of the major differences:

  • Cost: Using only magnetometers and accelerometers for the attitude detection, and efficient algorithms on an ARM7 micro, reduces the cost significantly over other gyro-based systems, and the parts are also readily available worldwide.
  • Failsafe: A CPLD provides a thoroughly tested failsafe UAV/RPV switching mechanism which is separate from the flight control computer, so that control can be recovered at any time. The switch was required to be very immune to noise, since we had some very low-quality R/C gear with only a very small range, so we've removed any chance of the state changing erroneously. The plane could even operate without an R/C receiver attached; although that would almost certainly be disallowed in most countries, it's handy when developing on the ground that you don't require the R/C transmitter to be switched on or the receiver connected.
  • Attitude detection solution: an efficient attitude solution, which means there's more time to calculate the guidance etc., and it should be capable of flying aircraft with high dynamics.
  • COTS-friendly board design: There are no modifications required to plug in your R/C gear. I didn't want to require anyone to pull apart their receiver and solder wires, which would void their warranties etc. Also, CUAV's power is sourced from the same lines that the R/C receiver, servos and speed controller get their power from, so the power consumption of the entire system is easily monitored.
  • Low power consumption: 375mW including GPS and sensors with an acceptable input voltage between 3.6 to 5.5V
  • Weight: 50g including GPS and sensors; Size: 10cmx4cmx2cm
  • Open source: Though the source isn't released yet, it will be soon. I'm very conscious of the RERO (release early, release often) mantra and am looking forward to getting it out there.
  • Flexibility: The custom board combined with the arm7 provides plenty of room for further additions of sensors etc.
  • Custom FDM: Custom physics model for the aircraft. This provides the ability to do hardware in the loop testing on the ground. Output to flightgear for visualization and networking. We found writing our own physics model for an aircraft helped with understanding the dynamics involved.
  • Data analysis: We use a Matlab GUI to help tune and test the system. This should eventually be ported to C/C++ so that you won't require Matlab.
  • Compiler: gcc (C/C++) for ARM7 development, using Linux so that no proprietary software is required.
  • Custom fixed-point math library: including vectors, rotation matrices, quaternions and trigonometry using 16.16, 32.32 and a 64-bit integer accumulator class. When we started the Apeliotes project there weren't any math libraries available for the ARM7 (or we couldn't find any), and since there is no built-in divide on the ARM7 we needed to build our own.
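To give a flavor of what a 16.16 fixed-point type involves, here is a minimal Python sketch mirroring the integer arithmetic an FPU-less ARM7 would do (illustrative only, not CUAV's actual library): the value is stored as an integer scaled by 2^16, multiplication needs a double-width intermediate, and division pre-shifts the dividend.

```python
class Fix16:
    """Minimal 16.16 fixed-point number.

    On the ARM7 the raw value would live in a 32-bit register and the
    multiply would use a 32x32 -> 64-bit product; Python's unbounded
    ints stand in for that here.
    """
    ONE = 1 << 16

    def __init__(self, value=0.0, raw=None):
        self.raw = raw if raw is not None else int(round(value * self.ONE))

    def __add__(self, other):
        return Fix16(raw=self.raw + other.raw)

    def __mul__(self, other):
        # Double-width product, then shift back down by 16 bits
        return Fix16(raw=(self.raw * other.raw) >> 16)

    def __truediv__(self, other):
        # Pre-shift the dividend to keep precision; with no hardware
        # divide on the ARM7 this '//' would be a software routine.
        return Fix16(raw=(self.raw << 16) // other.raw)

    def to_float(self):
        return self.raw / self.ONE
```

The trade-off is roughly 1/65536 resolution and a limited range in exchange for integer-only speed, which is why a 64-bit accumulator class is needed for sums of products.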
[In the next installment, Matt talks about the advantages and disadvantages of magnetometers]

3D Robotics

GPS simulation hardware setup, part 2

In an earlier post, I discussed some of the options for connecting your autopilot to your PC for GPS simulation. I've now tried several of the options on our Basic Stamp autopilot, and I have some recommendations on what to do (and not to do).

Rather than try to re-use your dev board's serial connection to the PC for GPS simulation (an option I discussed in the earlier post), I recommend that you get a USB-to-serial board like this one ($20), and run two serial connections simultaneously. This is because one of the serial connections is going to be used by the Basic Stamp as a terminal monitoring the autopilot and incoming data. The second one is going to be used by the GPS simulation software. You really need both, especially if you're doing any debugging.

With the above USB-to-serial card, it's a little unclear which pin does what (the boards don't come with instructions). The picture below shows which wires you need. Basically, with the USB connector at the bottom, you want the first three pins from the top on the right side. They are, in order, ground, V+ and "Rx". (Confusingly, the terms Rx and Tx depend on your perspective of which side of the serial connection you consider the "sending" side. In this case, this is the same pin that we call "Tx" on our GPS module.)

We're going to plug this board into the exact same rows on the breadboard as we used for the real GPS module, so it's simply a matter of unplugging one and plugging in the other to do a simulation. This is what it looks like on my board (click to get a larger picture if you can't read the text).


When you plug in that USB-to-serial adapter, Windows should recognize it immediately and load the right driver. Windows will assign the board a serial port, but odds are it will be a pretty high one (I usually get 10 and above). To find out which one it got, go to the Windows Control Panel, System, Hardware, Device Manager, Ports. It should show that you've got a new "USB Serial Port (COM X)" where X is whatever port it's been assigned.

Now you need the simulation software. I found a good free one from FlyWithCE here. Fire it up, and see if you can select the port that your USB to serial card has been assigned to. If that port isn't listed, you need to remap the card to a lower port. The easiest way to do that is back in Windows Device Manager. Select the port, and click Properties, then Port Settings, then Advanced. Select the lowest port you can. Click Okay, unplug the cable and plug it back in again. Now when you check Windows Device Manager, it should be listed at the new port numbers.

Now go to the Basic Stamp IDE and run the GPS test code from this post. The Debug terminal should show that the code can't find a GPS signal. Now go to the GPS Simulator. Enter a starting lat/lon and press start. If you set everything up right, you should start to see NMEA sentences. Click on the Move and/or Circle buttons and enter some speeds, and those GPS readings should start to change.

Now you've got GPS simulation working! If you fire up your autopilot, it should start to navigate according to these [fake] readings. But it's more fun to see what you're doing, so we'll replace the FlyWithCE simulator with Flight Gear in the next post.

3D Robotics
[In the previous installments of my interview with Curt Olson, I focused on using FlightGear for GPS simulation--you fly the plane around in the flight simulator and the software outputs GPS data to fool your autopilot into thinking that it's the plane in the flight simulator, so you can see how the autopilot responds to being off course. Now we're going to turn to more sophisticated uses, with sensor data from your UAV (either live or recorded) driving the simulated plane on-screen. This is pretty hard-core stuff and not for everyone, but for those of you who want to really push the envelope, here's Curt, straight from the source. -ca]

FlightGear has the ability to turn off/disable its built in flight dynamics engine (i.e. the built in physics model that computes the simulated motion of the simulated aircraft.) Once the built in physics are turned off, you can feed in a sequence of locations, orientations, control surface positions, and other data from some external source, and FlightGear will dutifully render that data. If you connect up FlightGear to a data stream from a real time flight, or a data stream from a replay of a real flight, and feed in a smooth sequence of changing positions and orientations (hmmm, I work at a university so I should probably call this a sequence of locations and attitudes so as not to offend) :-) then FlightGear will render a smoothly changing view based on the data you are sending.

This turns out to be a very powerful and popular use of FlightGear. You can watch the FlightGear rendered flight from a variety of perspectives, you can examine the motion of the aircraft relative to the control surface deflections, you can leave a trail of virtual bread crumbs in the sky to see your flight path, you could include a sphere at the location and altitude of each of your waypoints to see how the autopilot tracks towards them, etc.

One thing I'm working on now (in partnership with a company in the LA area called LFS Technologies) is a glass cockpit display that shows even more detailed and interesting information. It's not completely finished, but it will connect up to flightgear and serve as another view into the guts of what the autopilot is thinking and trying to do. I don't know if I ever posted this link publicly, but here is a short/crude movie that shows the "glass" cockpit view in action ... low res and chunky, sorry about that.

Hopefully you can see there is a map view of the route and waypoints, there is a compass rose, a vsi (vertical speed indicator), an altimeter tape, and a speed tape. On each of these you can place a "bug". I.e. a "heading bug" would sit on top of the target heading, the "altitude bug" would sit on top of the target altitude, etc. And then in addition to all this, the actual control surface deflections are shown in the upper right.

What this gives you is a real-time view of where you are, where you should be going, and what the autopilot is trying to do to get there. For example, let's say the target altitude is 1500' MSL and the UAV is currently flying at 1200' MSL. The altitude bug should be planted at the 1500' mark on the altimeter tape so we know where we are trying to go, and the altimeter will show the current altitude. If there is an altitude error, the autopilot should want to climb to fix that. So the VSI will have its bug positioned to show what the autopilot wants the target rate of climb to be ... maybe 500 fpm for this example (and the rate of climb should diminish the closer you get to the target altitude so you intercept it smoothly.) Now you can look at the VSI and see what the target rate of climb is compared to the actual rate of climb. If there is a difference between the two, the autopilot will try to adjust the aircraft's pitch angle to increase/decrease the actual rate of climb. The primary flight display (PFD) shows actual pitch and roll, but also contains flight director style "vbars" that can be used to show the desired pitch and roll angles. Finally, in the upper right, you can see an indication of the elevator position that the autopilot is commanding to try to achieve the target pitch angle. Easy huh? :-)

So in other words, elevator deflection is used to achieve a target pitch angle. The pitch angle is varied to achieve a target rate of climb. The rate of climb is varied to achieve a target altitude. And the glass cockpit display in combination with FlightGear shows *exactly* what the autopilot is thinking and attempting to do for each of these stages of the altitude controller.
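That three-stage cascade can be sketched in a few lines. This is an illustrative stand-in, not Curt's autopilot: every gain and limit below is invented, but the structure (altitude error → target climb rate → target pitch → elevator) is the one just described.

```python
# Hedged sketch of the cascaded altitude controller described above.
# All gains and limits are made-up illustration values.

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def altitude_controller(target_alt_ft, alt_ft, climb_fpm, pitch_deg):
    # Outer loop: altitude error sets a target rate of climb, limited so
    # the aircraft intercepts the target altitude smoothly.
    target_climb = clamp(0.5 * (target_alt_ft - alt_ft), -500.0, 500.0)

    # Middle loop: climb-rate error sets a target pitch angle.
    target_pitch = clamp(0.01 * (target_climb - climb_fpm), -15.0, 15.0)

    # Inner loop: pitch error sets the elevator deflection (normalized).
    elevator = clamp(0.05 * (target_pitch - pitch_deg), -1.0, 1.0)
    return target_climb, target_pitch, elevator
```

Each intermediate value here is exactly what the glass cockpit "bugs" display: the climb-rate bug shows `target_climb`, the flight-director vbars show `target_pitch`, and the upper-right readout shows `elevator`.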

And the story is similar for the route following. The autopilot uses aileron deflection to achieve a target bank angle. The bank angle is varied to achieve a desired heading. The heading is varied to fly towards the next waypoint. And again all of these can be seen changing in real time as the aircraft flies (or in a replay of the flight later on if you are trying to analyze and tune or debug the autopilot.)
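The outer loops of that route follower can be sketched the same way. This is an invented illustration, not code from FlightGear or any autopilot mentioned here: a flat-earth bearing to the waypoint, and a bank command proportional to the wrapped heading error.

```python
import math

# Illustrative sketch of the route-following outer loops described above.
# Flat-earth approximation; the gain and bank limit are invented.

def bearing_to_waypoint(lat, lon, wp_lat, wp_lon):
    """True bearing (degrees, 0-360) from the current position to the waypoint."""
    dlat = wp_lat - lat
    # Scale longitude difference by cos(latitude) so east-west distance is right.
    dlon = (wp_lon - lon) * math.cos(math.radians(lat))
    return math.degrees(math.atan2(dlon, dlat)) % 360.0

def bank_command(target_heading, heading, max_bank=30.0):
    """Target bank angle from the heading error, wrapped to [-180, 180)."""
    error = (target_heading - heading + 180.0) % 360.0 - 180.0
    return max(-max_bank, min(max_bank, 0.5 * error))
```

The wrap in `bank_command` matters: without it, a heading of 350° with a target of 10° would command a long left turn instead of the short 20° right turn.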

So how does this help anything? If you are actually working on designing an autopilot and doing some test flights, then (as an example) maybe you see that the target pitch angle is never achieved. Then you notice from the visualizer that you run out of elevator authority before you get to the target pitch. Or maybe you could see things like excessive elevator motion as the system continually over compensates trying to seek the desired pitch. Then based on the specific observed behavior, you can tune the gains or limits of the autopilot system to fix the problem. Maybe you need to reduce the sensitivity (gain) of the elevator controller to eliminate the over compensation, maybe you need to increase the limits of the allowed elevator motion to be able to allow for greater pitch up or down angles.

As another example, maybe very early on in your autopilot development, you observe the aircraft always enters a loop when you activate the autopilot. From the FlightGear visualizer system, you might see that the elevator is moving the wrong direction so that more and more up elevator is commanded as the pitch angle gets higher and higher. You might realize that the direction of elevator deflection should be reversed inside the autopilot so less and less up elevator is given as the pitch angle increases.

The alternative to having tools to replay and visualize what is happening is that you fly, something strange happens, you land, scratch your head and try to figure it out, make some stab in the dark change, take off, see if it helps, it probably doesn't, you land, reverse the change you made earlier or make the same change but in the opposite direction, take off, more weirdness, more head scratching, etc. The more tools you have to see and understand the process of exactly what the autopilot is trying to do, the better off you are when it comes to making adjustments to make the autopilot even begin to work, or later to refine the behavior and make it work better or converge faster.

You could even take a step backwards in the pipeline. All of the above presupposes that you know your location, and know your attitude (pitch, roll, yaw.) But all those values are based on noisy and imperfect sensors and probably process through a variety of filtering and interpretation algorithms. It may be that you are trying to debug and tune the algorithms that tell you what your roll, pitch, heading, location, rate of climb, etc. are. In this case, a visualization tool is still very useful because you can compare what the computer is showing you to what you are seeing in real life (or what you remember seeing.) You can look at a real-time 3D visualization of the sensor data and say "that ain't right", or "that looks about right". Either a plane looks like it is flying right or it doesn't. If it is flying along on a straight and level path but the nose is pitched up 45 degrees in the visualizer, you immediately see that something is wrong. If it is flying along on a straight and level path, but the tail is pointed forward, you can see something is wrong. A visualization tool like this can help you see and understand what your sensor integration algorithms are doing and possibly when and where they are breaking down. And all of that is useful information to point you towards the exact location of the problem and hopefully help you find a solution.


Read more…
3D Robotics

Blimp UAV: handling navigation indoors

The blimp UAV project is moving along nicely. We've picked a general size and selected most of the components. We've now decided to go with a PIC processor, rather than the Stamp, Propeller, or ATMega168 that we've been using here to date because you can get PICs in faster speeds and with more memory, something we'll need for expandability down the road. My partners in this project also have a lot of custom PIC development tools, so since they're going to do most of the work I'm happy to take their suggestion.

The big deal this week is that our IR navigation system, Evolution Robotic's NorthStar, arrived. This is the DevKit, which isn't cheap ($1,800) but we only need one and the idea is that we can build a board that has the receiver built in and a cheap transmitter kit. We're hoping to keep the cost to users of both under $100.

The kit that arrived is for NorthStar 1.0, and now that Evolution Robotics has announced Northstar 2.0 at CES, I can disclose that that's actually what we've been planning to use all along. But the dev kit for that one isn't ready yet, so we're building the first prototype on 1.0 and then upgrading the necessary parts and code when 2.0 arrives.

Here's the basic overview of how NorthStar 1.0 works:


And here's what's in the dev kit:

First, it comes in two nifty cases, one for the IR transmitters (shown above) and one for the IR receiver:


Here's what's inside the transmitter case (click for bigger picture and readable text):


Here's what's inside the receiver case:


Here's what the receiver looks like, attached to the PC interface that's used for testing:

The receiver module, which is the slim black box on the right, weighs about 12 grams. It's got a bunch of directional IR receivers inside. The fixed IR transmitters (pic at the top) project beams at unique frequencies on the ceiling, and the receiver tracks them and outputs x and y position and a directional vector. You can think of it as a very high resolution (2-3 cm) GPS replacement for indoors.

Here's what the receiver looks like placed on the BlubberBot circuit board, which is about the size of the one we're going to use (this one is based on the Arduino platform, but ours isn't very different). This is the front of the board, hanging from the bottom of the blimp:


And here's the NorthStar receiver that I've placed on the back to get a sense of scale:


You'll note a challenge that we'll have to overcome. NorthStar 1.0 (and one of the options on NorthStar 2.0) is based around the idea that IR transmitters would beam spots on the ceiling and the receiver would be placed to look up and navigate from those. But our blimps are going to be used in gymnasiums and other large rooms where the ceiling is too far away to see. So we'll want to navigate based on direct line of sight from the transmitters.

So where should we mount the receiver? If we mount it facing down, it will lose sight of the beacons when it's close to the ground. Facing to any side means that it won't be able to see any beacons not on that side. We could use two receivers, one on each side and hope that one's always in sight of a beacon, but this introduces complicated hand-off problems as the blimp rotates.

Roomba, the robot vacuum cleaner from iRobot, uses a similar system to get back to its charging base, but rather than spots on the ceiling or trying to keep facing an IR beacon, it uses a cone-shaped mirror that bounces IR from any angle down to a horizontal ring of IR sensors:


What if we mounted one of these on top of the NorthStar receiver and then placed the package horizontally below the blimp? We'd use two direct IR beacons in the room, rather than projecting spots on the ceiling (that just means taking the diffusing lenses off the IR transmitters).

We'll have to play with the system a bit to see if that works, but for now that's the plan. BTW, with a third IR transmitter, it's possible to get altitude, too, but the math on that is kinda gnarly, so we're using an ultrasonic sensor firing down for now.
Read more…
3D Robotics
(part one of the interview is here. A followup to the post below, with some more specific advice, is here.)

Before we drill down with Curt on the specifics on one way to use FlightGear for simulation (synthetic GPS generation), I should start with a little primer on the hardware side of simulation.

The first thing we'll need is a way to connect a PC's serial port (or, if you're like me and only have a laptop with no serial hardware, a virtual serial port via USB) to your autopilot. In the example in the next post, we're just going to be doing GPS simulation, basically tricking your autopilot into thinking that it's flying around when in fact it's right on your desk.

There are two ways to make this connection with your PC, where the simulator will be running: in hardware or in software. The first is a simple plug replacement for your GPS module, but requires some one-time wiring. The second is the easiest to wire up, but requires you to change your autopilot code every time you run a simulation.

The hardware approach:

If you're using a desktop PC with a real serial port, just cut the end off of a 9-pin serial cable and do one of the following, depending on your setup:
  • If you're plugging in your GPS with a standard three-pin RC servo plug, just connect the serial cable's pin 3 to the white wire of the servo plug. (See DB9 in this schematic to know which is pin 3)
  • If you're using a standard GPS connector, such as the little white one in the EM406, the serial cable's pin 3 should go to the connector's pin 4 (Tx), as discussed in this post.
  • If you want to go straight to your breadboard, just cut the end off of a 9-pin serial cable and solder pins 2, 3 and 4 to a header strip (break off three pins from a breakaway strip). We're actually going to use only one of those pins--the Tx signal on pin 3--but you may want the others for other simulations later. Stick that header strip on some unused part of your breadboard and connect it with a wire to whatever Basic Stamp pin you're using for GPS in.
If you're connecting to your PC with a USB cable, as I do, you'll need to buy a USB-to-serial converter such as this one ($20). Plug it into your breadboard and connect V+ and ground as shown in this tutorial. Connect pin 2 to whatever CPU pin your GPS Tx pin is normally connected to. If you're working on a development board, this is probably the easiest approach. It's what I'm doing. [UPDATE: here's a How-To for my particular setup]

The software approach:

At this point you're probably saying "But my autopilot is already connected to my computer! Why do it again?" Well, yes, but it isn't connected to your PC on the same wires as your GPS. If you want to have a simple plug trade between your GPS and your simulator, you'll have to use the hardware approach above. But if you're willing to tweak your autopilot code when you're running simulations, you can skip all of that and edit a single line, changing the pin on which your autopilot looks for GPS data.

In our Basic Stamp examples, you'd go to the constant declaration part at the top of the code and edit "Sio PIN 15" (or whatever you've set as your GPS-in pin) to "Sio PIN 16". That's because "16" is the Basic Stamp code for the dedicated serial-input pin (SIN, physical pin 2), which is normally used by the Stamp Editor during the download process. That will be different on different processors and programming languages, so revise accordingly if you're using something else. Just make sure you're telling the autopilot to look for GPS on the same pin it normally uses to communicate with the PC.

And now to simulation...


Now you can use the same cable and hardware setup you normally use, but the autopilot will look to the PC for GPS NMEA sentences. And those will now be generated not by a real GPS module, but by a simulator.
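All the "simulator side" has to produce is well-formed NMEA text. Here's a minimal sketch of building a valid $GPRMC sentence with its checksum; the field layout is the standard one, but the fixed date field and anything about ports/baud rates are placeholders for whatever your setup uses.

```python
# Minimal "fake GPS" sketch: build valid NMEA sentences (with checksum)
# that could be written to the serial/USB port feeding the autopilot's
# GPS-in pin. The date field here is a hard-coded placeholder.

def nmea_checksum(body):
    """XOR of all characters between '$' and '*', as two hex digits."""
    cs = 0
    for ch in body:
        cs ^= ord(ch)
    return "%02X" % cs

def gprmc(time_utc, lat, lon, speed_kts, track_deg):
    """Build a $GPRMC sentence from decimal-degree lat/lon."""
    lat_h = "N" if lat >= 0 else "S"
    lon_h = "E" if lon >= 0 else "W"
    lat, lon = abs(lat), abs(lon)
    # NMEA wants ddmm.mmmm / dddmm.mmmm (degrees then decimal minutes).
    lat_f = "%02d%07.4f" % (int(lat), (lat - int(lat)) * 60)
    lon_f = "%03d%07.4f" % (int(lon), (lon - int(lon)) * 60)
    body = "GPRMC,%s,A,%s,%s,%s,%s,%05.1f,%05.1f,010108,," % (
        time_utc, lat_f, lat_h, lon_f, lon_h, speed_kts, track_deg)
    return "$%s*%s" % (body, nmea_checksum(body))
```

Write `gprmc(...) + "\r\n"` to the port at whatever baud rate your GPS module normally runs, and the autopilot can't tell the difference.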

So now we can turn to the next part of the Curt Olson interview, which will talk about how to get that simulator running.

Read more…
3D Robotics
I'm delighted to start our interview series with Curt Olson, the creator of the cool open source FlightGear flight simulator and a UAV engineer in real life. When I started researching how to use simulators (GPS and IMU) to ground-test UAVs (aka "hardware-in-the-loop" simulation), I found that Curt had done some of the most interesting work in this area. And because here at DIY Drones ignorance isn't a bug, it's a feature (learning in public helps others learn along with us), I thought I'd go straight to the source to find out how it's done.

We had a long email interview, so I'm going to break it into several parts. First an introduction and overview of the sort of UAV simulation you can do with FlightGear:

Q: Can you tell us a bit about yourself? We know you're the creator of FlightGear, but would love to hear a bit more about your day job and UAV projects. Also family, hobbies, odd things that nobody knows about you ;-)

Currently I'm employed in the Mechanical Engineering Department of the University of Minnesota, however that is only 50% time right now and I've been told that as of June, the department will be unable to renew my contract. <insert plug for Curt looking for possible employment or uav consulting projects> :-)

My primary focus at the university has been engineering support for a very advanced driving simulator:

I've been involved in a couple UAV projects with the U of MN aero department. Tons of pictures and information here and here.

I am also involved in a UAV project with ATI, sponsored by NOAA. This has been a lot of fun because it involves designing a new airframe from the ground up, quite a few unique operational challenges (launch from a small boat, recovery in the ocean) and [twist my arm] a couple trips to HI.

Family: I'm married, two daughters, age 4 + 7 (by the time the readers read this anyway.) And 2 dogs. Before kids came along I enjoyed skijoring here in MN with our 2 huskies:

Hobbies: RC model airplanes since high school, small plastic and balsa models before then. I also play a little soccer in an old guys league in the winter and still try to chase around the young guys on my summer team.

Odd things? I was a big Dukes of Hazzard (tv show) fan in high school. My latest FlightGear invention is a tribute to simpler times ... I think my sense of humor is optimized to entertain myself and probably not many others. :-)

Q: What are the range of options in UAV simulation/testing? We know about basic GPS NMEA sentence generation, but what else is possible, from course drawing to IMU simulation?

I think if you are careful and clever, you can test just about every element of your system on the ground. That's actually not a bad idea, because it's often hard to troubleshoot something that's a couple hundred yards away, that you don't have physical access to, and that you have no way to look inside and poke around in.

Q: What can FlightGear offer as a simulation engine? Advantages over other options, such as dedicated GPS sims or MSFT Flight Simulator plug-ins? Special suitability to amateur UAVs, etc.

FlightGear can be useful in a number of ways throughout the lifespan of a UAS project. Such as:
  • You can build a physics (aerodynamics) model of your aircraft in FlightGear and use that to do handling qualities tests and/or tune the flight control gains.
  • You can create a 3d model of your UAV with animated control surfaces for the purposes of visualizing your flight and analyzing the aircraft performance. There are certain things where you need to stare at tables of numbers and graphs, but for some things, seeing what the thing does in 3d and in real time can be helpful. Certain things that just aren't right can jump out at you visually, whereas you might miss them if you just looked at plots or raw numbers.
  • It's possible to leverage FlightGear as a component for hardware-in-the-loop testing. FlightGear can model the aircraft dynamics, send "fake" sensor data to your flight computer, and accept control inputs from your flight computer.
  • It's possible to test your flight control system or higher level control strategies purely within FlightGear before implementing them on an embedded flight computer.
  • As you begin the flight testing phase, you can feed your flight test data back into the aerodynamics model and make the simulation that much more accurate and useful.
  • Operationally, you can use FlightGear to replay/visualize past flights. You can feed the telemetry data to FlightGear in real time for a live 3d "synthetic" view of the world. You can overlay an advanced hud, steam gauges, or glass cockpit type displays. [example shown below of real video next to synthetic FlightGear view of the same flight]

  • If you have net access, you can connect the slaved copy of FlightGear to the FlightGear multiplayer system and then anyone in the world with a web browser could track your UAV live using our multiplayer map system (google maps based.)
  • You could also have multiple UAV's participating in the FG multiplayer (MP) system and you would be able to see each other in your respective synthetic displays.
[Next: a drill-down on getting FlightGear to drive an IMU]

Read more…
3D Robotics

Best way to simulate a UAV flight?

While it's cold and windy outside, the best way to test our UAVs may be simulations. As best as I can tell there are at least two ways to do this, but I can't say I really understand either of them well enough to implement:

  1. Generate synthetic GPS data with your PC's serial port, and see what the autopilot does as a result. Here's a GPS NMEA sentence generator. The fake course data can be plotted in Google Earth. This is just for navigation, of course.
  2. Generate synthetic GPS and IMU sensor readings, and display the autopilot responses in a flight simulator. Curtis Olson, the creator of FlightGear (the open source flight simulator), does that with his UAVs. No idea how. Impressive, though...

Has anyone tried either of these? Something else? Can you tell us how to do it?


Read more…
3D Robotics

Making a UAV fail-safe

A big part of the DIY Drones credo is keeping it safe, and by that I don't just mean adhering to FAA regs and staying away from built-up areas, but also keeping it safe for your expensive UAV! The truth, as we all know, is that computers crash and that aircraft flown by computers crash harder ;-)

The aim of a good UAV is to have a fall-back system by which a human pilot can take over if (when!) the autopilot goes funky. There are at least three ways to do this, all of which we feature in one or more of the autopilots here:

  1. Completely separate the navigation and stabilization systems and ensure that the stabilization system can be overridden by manual RC control. That's what we do in GeoCrawlers 1, 2 and 3, all of which use the FMA Co-Pilot for stabilization (it controls the ailerons and elevator, leaving just the rudder for the autopilot and navigation). If the autopilot crashes, you can still fly the plane with ailerons and elevator alone, something we end up doing all too often! (The FMA system always allows for manual input)
  2. Mechanically separate the autopilot and RC control systems. In the case of the Lego UAV ("GeoCrawler1"), the Lego Mindstorm system moves the whole rudder servo back and forth, but the RC system can always turn the rudder servo arm, allowing it to override the autopilot if need be.
  3. Install a stand-alone "MUX" or servo multiplexer, that allows the operator to switch control from the RC system to the autopilot and back again with the gear switch on the transmitter, even if the autopilot catastrophically fails. As far as I know, there's only one commercially available one of these out there, and that one, by Reactive Technologies (shown), is not cheap ($99). Still, if you install one and give it an independent power supply, there should be no reason why you can't regain control of your plane no matter how wonky your onboard computer has gone.
What you should probably not do is exactly what we do (temporarily) with the Basic Stamp autopilot (GeoCrawler3), which passes RC signals through the Stamp chip and synthetically recreates them on the other side for the servos. If that program has a bug or the chip otherwise freezes, you've basically lost your rudder and elevator, which could make keeping the plane in the air difficult indeed. You'll still have control of the ailerons and throttle, but good luck getting the plane down in one piece if your program decides to crash with the rudder and elevator at full deflection.

So the Basic Stamp UAV project might be a good place for a MUX. Anybody know of a cheaper one? (This guy is looking for one, too)

Read more…
Developer

First Arduino IMU test

I'm trying to develop an IMU for my Arduino-based UAV (heli) project. In my first test I just used a three-axis accelerometer, but it didn't work because the motor vibration generated too much noise. I then tried it just with gyros, but of course the gyros drifted. So I learned through experience what everyone already knows: the only way to make a good IMU is by mixing accelerometers with gyros and Kalman filters, which is eventually what I did ;-)

I ran a test to see how my IMU is responding, and made a line chart of the results. The test consists of rotating the device to 70 degrees and then shaking it, to see how the filters reduce the "noise". It looks pretty good:


The samples were taken every 20 milliseconds; the blue line is just the accelerometer and the red line is accelerometer + gyros + Kalman filters.
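This is not Jordi's actual filter, but as a hedged illustration of the same gyro + accelerometer mixing idea, here is a complementary filter, a simpler cousin of the Kalman filter he used: the gyro handles short-term motion, the accelerometer corrects long-term drift. The time constant is invented; the 20 ms sample period matches the test above.

```python
# Illustrative complementary filter (a simpler stand-in for the Kalman
# filter described above). Gains/time constant are invented values.

class TiltFilter:
    def __init__(self, dt=0.02, tau=0.5):
        self.dt = dt
        # alpha weights the gyro path; tau is the crossover time constant.
        self.alpha = tau / (tau + dt)
        self.angle = 0.0

    def update(self, gyro_rate_dps, accel_angle_deg):
        # Integrate the gyro (smooth, but drifts), then pull the estimate
        # toward the accelerometer angle (noisy, but drift-free).
        self.angle = (self.alpha * (self.angle + gyro_rate_dps * self.dt)
                      + (1.0 - self.alpha) * accel_angle_deg)
        return self.angle
```

Shaking the device shows up as large swings in `accel_angle_deg`, but because that input is weighted by only `1 - alpha` per step, the output barely moves, which is exactly the behavior the red line in the chart shows.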

Source code here: ArduIMU Beta1

Special thanks to Tom Pycke



Read more…
3D Robotics
A mindblowing report from Hackaday:

"The 24th annual Chaos Communications Congress in Berlin is already off to a great start. The first talk we attended was [Antoine Drouin] and [Martin Müller] presenting Paparazzi - The Free Autopilot. Paparazzi is an open source hardware and software project for building autonomous unmanned aerial vehicles. The main hardware board has an ARM processor and GPS. It uses inertial and infrared sensors to determine orientation and altitude. The four infrared thermopiles measure the air temperature. The ground is warmer than the sky and if you compare the temperature in the direction of each wing tip you can tell what angle the airplane is at. It's really that simple.

They did a pretty amazing live demo. Using the network connection they controlled a UAV flying in France and another in Germany. Both planes were streaming live video from belly mounted cameras. One relaying through a home DSL connection and the other through a UMTS cellphone. They were able to change way-points on the fly and issue flight pattern commands. There is a ground crew at each location with a security pilot that will switch the controls to manual if things get out of hand."
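The wingtip-thermopile trick can be sketched numerically. This is an invented illustration of the principle, not Paparazzi's code: with the ground warm and the sky cold, the difference between the two wingtip sensors, scaled by the full ground-to-sky contrast, gives the bank angle.

```python
import math

# Rough sketch of the thermopile attitude idea (not Paparazzi's code).
# When level, both wingtip sensors see the same mix of warm ground and
# cold sky; when banked, one tip sees more ground and the other more sky.

def roll_from_thermopiles(ir_left, ir_right, ir_contrast):
    """Estimate roll (degrees) from wingtip IR readings.

    ir_contrast is the full ground-to-sky temperature contrast, which in
    practice would come from a vertically mounted sensor pair.
    """
    ratio = (ir_left - ir_right) / ir_contrast
    ratio = max(-1.0, min(1.0, ratio))  # guard against sensor noise
    return math.degrees(math.asin(ratio))
```

Note that `ir_contrast` has to be measured continuously: the ground-to-sky contrast changes with weather and time of day, which is one reason these systems carry an extra vertical sensor pair.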


The slides, which are a must read, are here. The video of the presentation is here.

I'm giving a DIY Drones talk/demo at Etech in March with blimp UAVs. But it will be hard to top this.
Read more…
3D Robotics
Jordi has opened my eyes to the Arduino platform, which is being described as a "Basic Stamp killer". Is it a good candidate for autopilots? Well, let's look closer.

Arduino is an open source embedded processor platform, based on the ATMega168 CPU, which has more memory than the Stamp and is a lot cheaper. There's proper development software available and SparkFun has a full line of dev boards and other accessories. Its programming language looks like C but should be easy enough to learn for people who know Basic. It started as an Italian project (it's named after an Italian king) and still has a European flavor, so that may explain why we in the US don't know it well.

But Jordi, in Mexico, has done some very interesting work in exploring its potential as an autopilot platform. His main project is the "Arducopter" (shown at right), which has resulted in some very nice code, such as this navigation routine. In his comments, Jordi (BTW, he's just 21) described some of the cool things he's doing with it, which I'll simply quote with links here: "This is my first test with Boarduino (a breadboardable version of Arduino) controlling servos and using an accelerometer from a Nintendo Wii. Right now I'm using gyros and Kalman filters. I even wrote code to read PPM signals, the GPS is finished and working pretty well, the IMU is in beta, and I'm developing an altimeter using I2C technology and high quality pressure sensors." Here are some links he provided:

I'm intrigued. I don't see anything here we can't do with Basic Stamps with a little fiddling, but I have to admit that certain projects look like they would be easier with the Arduino, mostly thanks to its greater memory and full range of variable types, including floating point. Anybody else looking seriously at Arduino?
Read more…
3D Robotics

Basic Stamp autopilot tutorial, part 5

In my last tutorial, I showed you how to upgrade from the 12-sat Parallax GPS module to the much better (and cheaper) 20-sat EM406. Unfortunately, my instructions weren't complete and I've spent a frustrating week actually trying to get it to work properly. I've now diagnosed the problem and fixed it, so this tutorial will help you avoid the problems I ran into.

Just to remind you, the main difference between the Parallax GPS board and the EM406 is that the EM406 is a bare module that you've got to wire properly onto your Basic Stamp dev board, and the EM406 outputs raw NMEA "sentences" that you need to parse, rather than specific requested data fields as is the case with the Parallax board (trust me, the slight hassle of working with the 406 is more than made up for by its great performance).

In my instructions last time, I advised you to follow this tutorial that was in Servo Magazine this month. Unfortunately it has a serious error in the diagram of the EM406 pins--it actually has the Rx and Tx pins reversed. This took me days to work out, and eventually I went to the manufacturer's datasheet for the EM406 to get it right.

The only three wires you need are 1, 2 and 4. That's ground, V+ and Tx. (pin 5 is another ground, which I've also connected to pin 1 in the picture, but I don't think that's actually necessary).

If you click on the photo above, you can see that I've soldered those wires onto a header strip, plugged it into the Basic Stamp dev board's breadboard, and connected the Ground, V+ and Tx wires. I connected the Tx wire (the white one in the picture) to the Basic Stamp's pin 9, although you can choose any unused pin you want.

Here's the Basic Stamp code that will test the setup above. As always, change it if you're using a different pin for the GPS Tx or if you're using a Stamp other than the BS2p (by modifying the "GPSBD CON 500" as instructed by the comments on that line)

The EM406 should get a sat lock (LED starts blinking) in less than a minute, indoors or out. When you run the above code, your debug screen on your PC should start showing NMEA sentences with lat, lon, and a few other data points. You can then decide what data fields you want in your autopilot and write your code accordingly, or incorporate a full GPS parser as discussed in this series.
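For reference, here is what that parsing step looks like, sketched in Python rather than PBASIC (on the Stamp you'd do this with SERIN and its WAIT modifier): verify the checksum, then pull lat/lon out of a $GPGGA sentence. The sentence format is standard; everything else here is illustrative.

```python
# Sketch of NMEA parsing: checksum check plus lat/lon extraction from
# a $GPGGA sentence (decimal degrees out). Illustration only.

def parse_gga(sentence):
    """Return (lat, lon) in decimal degrees from a $GPGGA sentence, or None."""
    if not sentence.startswith("$GPGGA"):
        return None
    body, _, checksum = sentence[1:].partition("*")
    cs = 0
    for ch in body:
        cs ^= ord(ch)
    if checksum and "%02X" % cs != checksum.strip().upper():
        return None  # corrupted sentence -- discard it
    f = body.split(",")
    # Fields 2/4 are ddmm.mmmm / dddmm.mmmm; fields 3/5 are hemispheres.
    lat = int(f[2][:2]) + float(f[2][2:]) / 60.0
    lon = int(f[4][:3]) + float(f[4][3:]) / 60.0
    if f[3] == "S":
        lat = -lat
    if f[5] == "W":
        lon = -lon
    return lat, lon
```

Discarding sentences with bad checksums matters in practice: a single garbled serial byte can otherwise teleport your autopilot's idea of its position.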

Previous posts in this series:

Tutorial 1 -- Servos
Tutorial 2 -- Reading the Rx
Tutorial 3 -- Adding GPS
Tutorial 4 -- Upgrading your GPS
Read more…
3D Robotics
By popular demand, I've added two pages on this site, linked to from the text box on the front page.
Given the constraints of the social networking platform we're on, the way these are set up is as blog posts that can be edited at will. Ultimately, I'll allow you to edit these pages yourself, wiki-style, but for now if you'd like your project to be included, please leave a comment on the relevant page or private message me (envelope icon above) and I'll include the links. No doubt I've missed a bunch of projects. Let me know what you'd like me to add.
Read more…