Jesper Andersen's Posts (19)

As you may have read in my previous blogs, I’m experimenting with an autonomous pilot system that can guide several vehicles in a coordinated fashion. Each vehicle is outfitted with a transmitter that sends telemetry back to the main computer, and the main computer sends back guidance commands.

The telemetry is based on 3DR serial link radios (915 MHz version). I did, however, find that the normal dipole antenna was heavy and often “in the way” of propellers and airflow - in general a nuisance.

I discovered that Molex produces an antenna for the same frequency band that is quite different from the normal dipole, and decided to try it out.

The antenna weighs less than 1 gram and has a self-adhesive side, which makes it perfect for mounting directly onto the fuselage or wing - a short wire then connects to the radio through an MCX connector.


The advantage is that you can easily place the antenna anywhere on the body or wing, and because of its low weight it will not upset the balance of the plane or create any drag.


Testing it out

I decided to mount the antenna on the wing of my trusted Z-84, also known as UTV-3. It has become the workhorse of many experiments and has the most flying hours.


I flew at increasing distances and heights and monitored the signal strength and telemetry errors.

This is by no means a thorough scientific comparison to the standard dipole antenna, but I experienced very few errors (<1%), and I flew as high and far as the law allows.

I will definitely use this kind of antenna on my next builds because of its low weight and how easily it sticks to a fuselage or wing.


Read more…

I've been working on a universal pod to carry additional equipment on drones. While playing around with my 3D program of choice (Shade 3D), I got the brilliant idea to add a set of small wings to my pod.


The idea is to create additional lift and control for the combined drone+pod (the drone is a delta wing).


But does it make any sense? I know very little about aerodynamics, but thought the delta wing might carry a bigger load (the pod) if the load had a set of wings of its own.


The pod could even be dropped for a glide landing once the mission was accomplished.

The question is: does it make sense, and how should the wing design be optimised in terms of placement and shape? Anyone?

Read more…


I’ve been experimenting with a small analysis package I originally called the SIXPack (Scientific X-Pander Pack). The idea is a module you can slap onto (almost) any drone. The module contains a microcontroller and a storage device, and can house one or several sensors, e.g. gas, light or air sensors, or cameras.


The module gets its power from the drone and can also receive telemetry data and merge it with the sensor data, providing a full set of telemetry and sensory data that is stored to an SD card (or sent back to planet Earth over radio).
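To make the merging idea concrete, here is a hypothetical sketch (names, fields and the nearest-timestamp strategy are my illustrative assumptions, not the actual SIXPack code): each sensor reading is paired with the telemetry sample closest in time, and the combined record is emitted as one CSV line for the SD card log.

```cpp
#include <cstdio>
#include <cstdlib>
#include <string>
#include <vector>

// Illustrative record types: a telemetry fix and a gas-sensor reading,
// each stamped in milliseconds since boot.
struct Telemetry { long ms; double lat, lon, alt; };
struct Sensor    { long ms; double methane_ppm; };

// Pick the telemetry sample whose timestamp is closest to the reading's.
const Telemetry& nearest(const std::vector<Telemetry>& tel, long ms) {
    const Telemetry* best = &tel.front();
    for (const Telemetry& t : tel)
        if (std::labs(t.ms - ms) < std::labs(best->ms - ms)) best = &t;
    return *best;
}

// Merge one sensor reading with its nearest telemetry into a CSV log line.
std::string mergeLine(const std::vector<Telemetry>& tel, const Sensor& s) {
    const Telemetry& t = nearest(tel, s.ms);
    char buf[128];
    std::snprintf(buf, sizeof buf, "%ld,%.5f,%.5f,%.1f,%.1f",
                  s.ms, t.lat, t.lon, t.alt, s.methane_ppm);
    return std::string(buf);
}
```

The same line format works whether the record goes to the SD card or out over the radio link.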

Early version

Yesterday I flew a kind of mock-up version - let’s call it the Ugly Betty version (the following pictures will show you why).


The electronics are contained in a standard mains junction box, which is kind of nice because of its IP56 water-resistant seals, but a nightmare aerodynamics-wise. This particular box contained the aforementioned microcontroller and storage device together with a methane gas sensor. Power and telemetry were supplied by the plane.


A short test flight revealed that everything worked nicely, and that some interesting readings were made and merged with the IMU telemetry (GPS, altitude, etc.). The graph below is just a plot of the flight - more flights are needed to evaluate how good the measurements really are.


It also felt like flying with the handbrakes on.


Is it a bird? A plane? No, it's a plane (or bird) with SensePod!

Luckily I’m fortunate enough to have access not only to a 3D printer, but also to a guy who really knows how to get the best out of one. So I fired up my good old Shade 3D software and started designing a nice aerodynamic enclosure for the SIXPack.


The SensePod (as I call it) is assembled from two almost equal halves. The bottom part can be replaced with different versions for different sensor configurations, and it also features a plug connector for external power and telemetry.


The pod will have ample space to piggyback a microcontroller and storage device along with some sensors, etc.

I also considered adding wings to compensate a bit for the extra weight it induces, but I'm no expert in aerodynamics, so I thought it best to leave that idea for now. The thought, however, that you could drop off the pod in mid-air and let it glide back to earth is enticing :-)

Next up is printing the pod at miniature size to make sure everything is modelled correctly - then the real thing gets printed! I'll post an update and a first-flight postmortem.

Read more…

And they say smoking is bad - part 2



Last time I looked at a set of 18650 batteries that had some nice specs, but in the end gave me less flying time than my half-capacity LiPo.

I got a lot of comments - thank you so much - and several people whispered “NCR18650B” as an almost mythical 18650-sized battery.

I went ahead and ordered two with solder tabs - prompted by uncertainty about the effects of using a cheap battery holder of dubious quality.

The NCR Panasonics are rated at 3400 mAh capacity at nominal discharge. The cell has a fuse that melts under sustained discharge above 10 A, but is otherwise unprotected.


I quickly soldered the two batteries in series and taped everything together to protect the solder tabs and wires. The end result was not the prettiest, but it works and the pack is easy to charge.


The other day was a nice, calm flying day, and I took my trusty delta wing to the skies with the stopwatch ticking. First up was my good old 1300 mAh Turnigy LiPo, which clocked just below 18 minutes at cruise speed (approx. 45-50% throttle).


Next up was the NCR18650B pack, which is approx. 15-20 grams heavier and thus needs a bit more throttle to get airborne and cruise along.


I flew both packs until the ESC would automatically cut out to prevent battery damage.


The NCR18650B gave me 28 minutes and 10 seconds. So around 10 minutes extra time!

The flight itself wasn't particularly interesting - flying slowly in big circles - but the extra time was very noticeable.


This all sounds good, but there are some things to consider:


  • Flying faster means a higher discharge rate, which also decreases the real capacity of the batteries - an effect that is much less pronounced in LiPo batteries. Flying both packs at full throttle would probably give more similar flight times.

  • The extra weight means that my delta wing is almost maxed out payload-wise.

  • If you can carry several packs in parallel, that’s probably better, because the load on each individual battery is lower.

So I will probably fly the NCR18650B when I want to fly slowly for a longer period with little payload, and LiPo when I want speed and/or more payload.

That said, the NCRs are definitely good batteries that seem to live up to their specs - unlike so many 18650-type batteries these days... you get what you pay for :-)

Read more…

And they say smoking is bad


Technology is often driven by request and demand, and while the drone community is growing rapidly, the number of people trying to quit smoking is probably a bit bigger. Along comes the e-cigarette, and with it a demand for high-current, high-capacity batteries - at low prices. I recently stumbled across such batteries (EFest 18650) promising a 35 A constant discharge and a capacity of 2500 mAh for as little as $6 a piece, in a shop that sells e-cigarettes and e-juice.

That sounded good, and while I don't smoke, I thought it would be fun to put two in my delta wing and see what they did to my flight time compared to my normal 2-cell 1300 mAh LiPo.

I charged the batteries using my LiPo balancer and made sure they charged to the nominal 4.2 volts. I then constructed a small pack using a cheap holder for such batteries. The connectors and wires in the holder were, let's just say, not that good, so I replaced them with something thicker to make sure no voltage drop was induced by poor quality.


I normally fly the delta with a small 2-cell 1300 mAh LiPo, and at low cruising speed I usually get approx. 18-20 minutes of flight time.

I started out with my normal lipo, and got 18 minutes and 50 seconds worth of flight.

Then I exchanged the pack for the EFest 18650 battery pack. With a promised 2500 mAh capacity, I expected at least 50% longer flight time (accounting for the approx. 30 grams of extra weight).


Sadly, I only got 15 minutes and 16 seconds! That's worse than the half-capacity 1300 mAh LiPo!

I'm not the first to try out 18650-type batteries - a nice test was made some time back - so why was I not getting the same results?

After a recharge, I discharged the batteries and measured the capacity - I got 2040 mAh when discharging at 0.5 C, not the advertised 2500. Furthermore, it turned out that due to the high internal resistance of the batteries, the more current you draw, the more energy is lost as heat in the cells - so at a current draw of approximately 3-4 A you end up getting only about 1000 mAh, hence actually getting shorter flight times than from a much smaller LiPo.
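A rough Peukert-style calculation shows how capacity can shrink like this at higher draw. The exponent and the rated current below are made-up illustrative assumptions, not measurements from this test - the point is only the shape of the effect:

```cpp
#include <cmath>

// Classic Peukert scaling: C_eff = C_rated * (I_rated / I_actual)^(k - 1).
// k is the Peukert exponent (1.0 would be an ideal cell; higher means the
// cell loses more usable capacity as the discharge current rises).
double deliveredCapacity(double rated_mAh, double ratedCurrent_A,
                         double actualCurrent_A, double peukert_k) {
    return rated_mAh * std::pow(ratedCurrent_A / actualCurrent_A,
                                peukert_k - 1.0);
}
```

With the 2040 mAh measured at low draw and a guessed exponent around 1.6, a 3-4 A draw lands in the neighbourhood of the ~1000 mAh observed in flight.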

I guess the lesson learned here is that these types of batteries are best used in parallel, where the current draw is distributed between several cells, AND you get what you pay for :-) For now I think I'll stick to my LiPos...

Most posts here are success stories, and it's always very inspiring to read what others have accomplished - I would certainly love some feedback from others who've had experiences - good or bad - with these types of batteries.

Read more…

I thought I'd share some concept images of the new test vehicle that is currently under way.


It's designated UTV-4 (Universal Test Vehicle number 4) and will be fitted with all the hardware required by the AutoPilot system described in my earlier blogs:

Part 1 that introduced the overall concept of a decentralised autopilot, where the autopilot can reside on a remote server and/or guide several drones in a coordinated effort.

Part 2 about the overall software architecture and hardware interface.

Part 3 A detailed look into some of the software features.

Part 4 Flight testing the small vehicle UTV-3 to prove the basic concept in real life and explaining the wonders of PID.

Part 5 A description of the complete electronics that goes into the final vehicle including gas sensing hardware.

Part 6 An overall description of the system fitted in the UTV-1

Building new vehicles is fun, and while we flight test the UTV-1, I've started to outline the hardware for the next drone. A lot of knowledge has already been gained developing the UTV-1, and hardware has also evolved over the last two years. The new drone will therefore feature better and faster hardware and will be even more universal than the UTV-1.


The specs will look something like this:

* On board computer - Banana Pi (dual core) or Raspberry Pi 2 (quad core)

* Two cameras (one high def and one NIR)

* Serial link radio



* Dual antenna RC receiver

* Instrument/Sensor bay

* Onboard display

* Flight time 1.5-2 hours

The front dome will house two cameras, possibly some more sensors, and a display that will show information about the vehicle prior to launch (state machine state, diagnostics, mission content, etc.). As I recently gained access to a 3D printer, some mounts, decks, etc. will be 3D printed, as well as some hardware mock-ups. This will ease the actual layout work before getting the real hardware. A lot of effort is placed on accessibility and ease of use.

A third camera is placed inside the body and will help to determine the stability and amount of vibration during different maneuvers.

The drone will feature either a WiFi link or a physical LAN port for upgrading software and downloading data (should you choose not to send it during flight).


The plane will be equipped with a serial link radio transceiver which will allow the craft to act either as a master or slave drone in a larger setup while also connecting to the ground using a GPRS modem.


The drone will also have a plug-and-play instrument bay (nice rhyme!), where different sensors can be plugged in and feed data to the onboard computer for further processing or transmission.

A big thanks to this great community and all the inspiring people who put effort and time into reading, writing, commenting and posting wonderful and inspirational products and ideas.

Read more…

An Autopilot in the clouds - part 6.5



The loooong wait...

The weather here in Denmark is optimal right now - if you’re either a fish or build 110% waterproof hardware. So while waiting for a decent chance to give the UTV-1 its maiden flight, I thought I’d briefly describe a little tool I made to ease mission planning and execution.

The UTV-1 can send and receive telemetry through the GSM net, which makes it very appealing to spend a little time on a graphical front end for a laptop: one that shows the usual stuff drone GUIs show (position, status, etc.) and also lets you control the craft to some extent.

I’ve built my little GUI with inspiration from the control centers of the Apollo era - a simple button-based interface, where buttons light up on error events, state changes, etc.

The GUI also allows you to chat directly with the drone (a video is included). I’ve kept the voice feedback, because it has proven very useful when your eyes are on the sky and not the screen.



1 - A text area that shows the raw messages being sent from the vehicle - great for nerdy debugging.

2 - A map area that shows the last known location of the drone.

3 - A text input field that allows you to chat with the drone for mission planning etc..

4 - A representation of the autopilot state machine - the row of buttons lights up as the vehicle makes it through its mission.

5 - A button that lights up when a connection is established

6 - A button that indicates whether manual RC-controlled flight has been activated

7 - A button that shows whether the state machine has been brought into terminate mode (which just means that a pleasant glide landing is being attempted); it can be activated manually or by the range guard mechanism.

8 - A number of warning lights that light up in various scenarios (pretty self-explanatory)


Telemetry can be updated 4 times per second, but I keep it to 1/s for the time being.

Below you'll see a short video of how a set of waypoints can be entered using the built-in chat client - in effect, you are actually talking to the drone :) This is a rudimentary first version, with fixed altitudes, etc.


Read more…

An Autopilot in the clouds - part 6


This article is part six in a series, describing some of the inner workings of an autopilot system - even though the description is specific for this particular system, a lot of principles are generic and can be used to describe a variety of auto pilot systems.

To summarise the previous posts we had:

Part 1 that introduced the overall concept of a decentralised autopilot, where the autopilot can reside on a remote server and/or guide several drones in a coordinated effort.

Part 2 about the overall software architecture and hardware interface.

Part 3 A detailed look into some of the software features.

Part 4 Flight testing the small vehicle UTV-3 to prove the basic concept in real life and explaining the wonders of PID.

Part 5 A description of the complete electronics that goes into the final vehicle including gas sensing hardware.

Today we take a look at the complete “guts” of the next test vehicle, how it’s set up for testing and how it fits inside the actual frame. There’s also a short video of the 100% automated checkout and pre-flight sequence.



Last time we took a look at the analysis unit that hosts the gas sensors, Arduino, IMU, etc. One thing I didn’t mention was the analog accelerometer (it arrived after the previous post).


The ADXL335 outputs an analog voltage for each of the three axes. The idea is to measure two things:

  1. Big, slow vibrations, caused by feedback in the IMU (wobble), a bad airframe design or a misaligned CoG

  2. Small, fast vibrations, caused by unbalanced props, a loose motor mount, etc.


To measure vibrations we have to sample the output of the ADXL335 a number of times each second and compare each sample to the previous one. The problem is that one sample rate is good for measuring case 2, and another for case 1.

After a lot of testing, it seemed that case 2 gives good results when sampled at 380 Hz, while case 1 is better at around 20 Hz.

The Arduino that interfaces with the ADXL335 is therefore programmed to take a subset of samples from the 380 Hz sample set, equivalent to roughly 20 Hz. This way, we get two vibration numbers for each axis: one for small, fast vibrations and one for bigger but slower vibrations. The numbers are sent together with the values from the gas sensors to the autopilot, and the watchdog will warn if either vibration measurement is above a certain threshold.
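A sketch of the dual-rate scheme might look like this (assumed logic, not the actual firmware): one axis is sampled at 380 Hz, every 19th sample is kept to form a ~20 Hz series, and each series is reduced to a single "vibration number" - here the mean absolute sample-to-sample difference, which is my illustrative choice of metric:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// One "vibration number" for a series of samples from one axis:
// the mean absolute difference between consecutive samples.
double vibrationNumber(const std::vector<double>& samples) {
    if (samples.size() < 2) return 0.0;
    double sum = 0.0;
    for (std::size_t i = 1; i < samples.size(); ++i)
        sum += std::fabs(samples[i] - samples[i - 1]);
    return sum / (samples.size() - 1);
}

// Keep every 'factor'-th sample: 380 Hz with factor 19 gives ~20 Hz.
std::vector<double> decimate(const std::vector<double>& samples,
                             std::size_t factor) {
    std::vector<double> out;
    for (std::size_t i = 0; i < samples.size(); i += factor)
        out.push_back(samples[i]);
    return out;
}
```

A fast alternating signal scores high at the full rate but near zero after decimation - which is exactly why the two rates catch the two different vibration classes.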

Testing rig

Before fitting the electronics onto the airframe, a test setup is made where everything is connected and tested. This is much easier while the electronics are not yet placed in the airframe, and the modularity of the electronics makes the transition easy. I did, however, place the boxes in the airframe prior to testing, to make sure there was enough room for interconnections as well as a good CoG.


This is the complete electronics block diagram, that shows how everything is interconnected.

The power consumption of the electronics, without servos and motor, was measured at roughly 0.35 amps, which means the computer and sensors would be able to run a full 14 hours on a 5000 mAh battery - not bad.
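The 14-hour figure is easy to sanity-check - capacity in mAh divided by average draw in mA gives runtime in hours:

```cpp
// Back-of-envelope endurance estimate: mAh / mA = hours.
// 5000 mAh at a 350 mA average draw gives roughly 14.3 hours.
double enduranceHours(double capacity_mAh, double avgDraw_mA) {
    return capacity_mAh / avgDraw_mA;
}
```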


Software packages

We will test two different software packages: one being the autopilot software described and used in the previous posts, the other a combination of “The SIXPack” and ALF.

The SIXPack (Scientific Instruments Xpander) is universal software for gathering data from various sensors (cameras, gas sensors, etc.), processing the data and sending it in near real time over a GPRS connection. Together with ALF, the system will be able to capture footage from the onboard cameras, analyse it and send the result on to a computer on the ground - all in near real time, and the receiver can be located anywhere as long as it has an internet connection! Pretty sweet!



There are several things we'd like to check while the hardware is mounted on the test rig. Most important is of course that the systems that need to talk to each other are in fact doing so. This includes the sensory platform, which sends data from the gas sensors and the analog accelerometer to the onboard computer over I2C. The IMU also sends a lot of sensor data to the onboard computer, through a serial connection. After some hassle with level shifters, I got all the data fed into the onboard computer, and also got the GPRS modem to send pictures and telemetry to a client on my laptop (no wires).

I also took some measurements of how hard the main computer's CPU was pushed while just receiving and processing telemetry and sensory data. After approx. 30 seconds, the average CPU load was 19%, which gives us plenty of headroom for image processing or for guiding other drones from the main drone.


With everything working on the test bed, it's time to put the two boxes (sensors+IMU and onboard computer) back on the airframe. With the boxes in place, the battery can be mounted on top of them and positioned for optimum CG.


The total weight is 1.33 kilograms, which should be fine for this airframe.

More Testing

So, with a lot of work put into diagnostics, the final checkout is pretty easy to perform. If the craft completes the automatic checkout and preflight check, everything should be good for the first real flight. Everything is controlled by the state machine, and only a 100% pass will lead to the craft going into READY state.

The preflight check makes sure that:

1. Commands can be sent from the autopilot software to the IMU and acknowledged by the IMU

2. The telemetry link between IMU and onboard computer performs perfectly (in terms of datapackets and signal fidelity)

3. The primary flight sensors deliver stable numbers

4. Servos and engine are responding

Below is a video that shows the startup and run of the preflight checkout sequence. The laptop client receives overall info from the onboard computer and reads it aloud. This is also convenient when flying, because you don't need to look at a screen while the bird is in the air.

The preflight sequence was performed 10 times in a row without a glitch. The autopilot client can run on any Java-capable device and receives telemetry data and statuses from the onboard computer.

So where does that leave us?

Right, so a summary of the vehicle looks like this:


UTV 1 - Flying wing with additional sensory system and onboard computer for auto piloting and data analysis.

The sensor suite consists of two onboard cameras (one normal and one NIR), two gas sensors (carbon monoxide and methane) and an analog vibration sensor.

Communication and data down/uplink through a GPRS modem using the XMPP protocol (meaning that control and monitoring can be done from anywhere, as long as there is an internet connection).

Range safety and automatic monitoring of vehicle performance, with the capability to shut the vehicle down or land it if problems occur.

Flight testing the ALF system for automatic landing spot detection.

Manual flight through a 2.4 GHz radio.

All in all, pretty cool :-)

Next time

Next time we’ll take the UTV-1 for its first test flight - pretty exciting.

Read more…

An Autopilot in the clouds - part 5



This article is part five in a series, describing some of the inner workings of an autopilot system - even though the description is specific for this particular system, a lot of principles are generic and can be used to describe a variety of auto pilot systems.

Today we’ll look at building the electronics for the new UAV (the UTV-1) incorporating some lessons learned from the smaller UTV-3.



If you’ve read the previous posts, you’ll know that I’ve been working on an autopilot with an architecture that allows the platform to run several instances of the autopilot and to separate the autopilot from the airframe. This allows rapid testing and the potential to have a central computer system coordinate several drones in a joint effort to solve a task.

So far the tests have been conducted on a relatively small airframe, which only allowed the basic hardware on board, with the autopilot computer ground based. This allowed a lot of concepts to be tested and paved the way for the next test vehicle.

The airframe will be the Zeta Science Phantom FX61 - it’s very similar to the Z-84, which was used for the small test vehicle. Now we need to figure out what to stuff in the airframe, and how.



The current test vehicle was built only to test the basic concepts and software. The new vehicle will host some sensors and a computer that can analyse the collected data in real time. To experiment with this, the airframe will be fitted with two downward-facing cameras - one regular, and one with the filter removed (a simple NIR camera) - plus two gas sensors. The place where I do testing is close to some waste dumps, so it would be interesting to analyse the surroundings and map the gas concentrations.


Lessons learned from UTV-3

Flying up here can be a wet experience, and even the most gentle landing will get your plane wet. I had a few incidents that were probably caused by electronics meeting water. In this vehicle we want to bring a computing platform (an Adapteva Parallella or a Raspberry Pi) and other delicate equipment that should be protected from the elements. That means some of the electronics will be fitted inside an enclosure that shields them somewhat. “Somewhat”, because we don’t want 100% watertight shielding - some sensors need to be exposed to the outside pressure and temperature, namely the IMU with its barometer and the two gas sensors used to analyse the air quality.

HW Sensory

Computer platform - Raspberry Pi B+ or Adapteva Parallella

Cameras - Raspberry PI Camera and Public Labs Infragram camera

Gas sensors - carbon monoxide (MQ-7) and methane (MQ-4)

Arduino nano 32u for analog input and processing

2x UBEC - 3 amps for power to the systems

IMU Crius SE 2.5


Even though we work with two enclosures, we still want the modularity of the complete package, which allows us to test most of the setup outside the actual airframe. Since there are some links between the two enclosures, both are mounted on the same base plate, which allows easy interconnection and testing (a test rig).

Each enclosure has its own UBEC that supplies power.


The IMU/Sensors platform

One enclosure houses all the sensors (not counting the two cameras), including the two gas sensors. The enclosure is naturally ventilated, to make sure outside air can get in.


Everything inside is labelled and fastened with vibration-dampening pads.


The two gas sensors have built-in heaters that need to be on for at least 60 seconds before good measurements can be made. The sensors are connected to a 32U-based Arduino with 10-bit resolution.
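The warm-up gating and ADC conversion could be sketched like this (assumed logic, not the actual firmware): readings are ignored until the heater has been on for 60 seconds, and the Arduino's 10-bit ADC count is converted to a voltage.

```cpp
// Heater warm-up period from the post: 60 seconds before readings count.
const unsigned long WARMUP_MS = 60000UL;

// True once the heater has been on long enough for valid measurements.
bool sensorReady(unsigned long now_ms, unsigned long heaterOn_ms) {
    return now_ms - heaterOn_ms >= WARMUP_MS;
}

// 10-bit ADC: counts 0..1023 map linearly to 0..vref volts.
double adcToVolts(int counts, double vref) {
    return counts * vref / 1023.0;
}
```

Without calibration curves, the voltage is only a relative gas-level indicator - which matches how the readings are used later in this post.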

The final enclosure will also host an analog accelerometer that measures the vibration of the enclosure and the craft, which can be monitored from the ground. The watchdog system will also monitor this sensor for large vibrations and will warn if the ride starts getting bumpy.

A little field test

To test out the enclosure with the sensors, I attached it to my car and drove out to the dirt-processing plant close to the place where I conduct my testing. It was very pleasing to see some quite big “peaks” while passing a certain spot. Below we see the two peaks (a bit left of the middle), the result of passing a certain location, turning around and driving past it again.


Without calibrating the sensors, we can’t say exactly how much carbon monoxide and methane we measure, just that we see more in some places than in others. For now that’s okay - it proves that the gas sensing unit works and is able to measure differences in gas levels with a strong airflow going through the enclosure.


Next time

The next post will be about the complete test rig, which in the end will be fitted into the craft. With all the features of the electronics, some extensive testing is needed. Power consumption, stability and so forth are also essential to uncover before placing the electronics on the airframe.


Read more…

An Autopilot in the clouds - part 4


This article is part four in a series, describing some of the inner workings of an autopilot system - even though the description is specific for this particular system, a lot of principles are generic and can be used to describe a variety of auto pilot systems.

Today we take a look at the overall system and some initial flight testing.


Current system overview

To give you better insight into the inner workings of the autopilot, I’ll throw in a little block diagram of some of the building blocks of this autopilot system.

More blocks are bound to be added as I expand the system.


Coms - Handles all internal communication between modules as well as external communication, i.e. user input.

Logging and speech (talkback is a nice way to get crucial info without looking at a screen) are also handled by Coms.

Telemetry - Handles input from sensors, GPS, user input, etc. It also handles communication back to the craft (guidance commands, etc.)

Watchdog - monitors the whole system and warns if any anomalies occur during flight.

Rangeguard - makes sure that the drone does not get too far or too high during the flight.

State machine - handles the overall logic of the autopilot

Vessel - The craft type specific part (copter, fixed wing etc.) and the library of maneuvers that the craft is able to perform.


User input is currently handled by an external auto sequencer (XAS), which is programmed to perform a series of commands at specific times. The XAS waits for the state machine to perform a preflight check, and when everything is ready, the XAS issues an auto-takeoff command. Further down the development line, app and chat integrations will be added to control the autopilot.
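A state machine of this kind can be sketched as follows - the state names and their order are my assumptions based on these posts, not the actual implementation. Each phase advances only on success, and any failure drops the craft into a terminate/glide mode:

```cpp
// Assumed flight phases, loosely following the posts' descriptions.
enum class State { INIT, PREFLIGHT, READY, TAKEOFF, NAVIGATE, LAND, DONE,
                   TERMINATE };

// Advance on success; any failed step falls through to TERMINATE
// (the glide-landing mode).
State next(State s, bool stepOk) {
    if (!stepOk) return State::TERMINATE;
    switch (s) {
        case State::INIT:      return State::PREFLIGHT;
        case State::PREFLIGHT: return State::READY;    // 100% pass required
        case State::READY:     return State::TAKEOFF;  // XAS issues takeoff
        case State::TAKEOFF:   return State::NAVIGATE;
        case State::NAVIGATE:  return State::LAND;
        case State::LAND:      return State::DONE;
        default:               return s;
    }
}
```

The nice property of this shape is that the XAS only needs to watch for READY before issuing its auto-takeoff command.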


Flight testing

So it’s time to take the system for a spin in a closed-loop test, where the autopilot actually gets to control the drone, not just tell a human how it would like the drone to fly. The idea is to do some simple stuff that exercises most of the system.

The state machine, the sensor/telemetry system and the maneuver system are of particular interest in this first test, so I decided to configure the state machine for a simple one-waypoint mission, where the drone takes off automatically, navigates to the waypoint and does an autoland afterwards.

Auto takeoff utilizes the gyro/accelerometer to sense when the flying wing is thrown; upon this, it starts the engine and begins to climb to approx. 10 meters altitude, after which the auto-takeoff maneuver finishes.

Waypoint navigation uses the GPS and magnetometer to determine heading and distance to the waypoint.

Auto land uses the altimeter and gyros to determine when to cut the engine and when the craft is actually down.
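The three triggers above can be sketched as simple threshold checks. The acceleration threshold here is an illustrative guess; the 10 m climb target and the 2 m engine cut-off are the figures mentioned in this post:

```cpp
const double THROW_ACCEL_G  = 2.0;  // assumed launch-detect threshold
const double CLIMB_TARGET_M = 10.0; // auto takeoff finishes at ~10 m
const double ENGINE_CUT_M   = 2.0;  // autoland cuts the engine at 2 m

// A sustained forward-acceleration spike counts as a launch throw.
bool throwDetected(double forwardAccel_g) { return forwardAccel_g > THROW_ACCEL_G; }

// Takeoff maneuver completes once the climb target is reached.
bool takeoffDone(double altitude_m) { return altitude_m >= CLIMB_TARGET_M; }

// Autoland turns the engine off near the ground.
bool cutEngine(double altitude_m) { return altitude_m <= ENGINE_CUT_M; }
```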

Approximately 30 parameters are recorded and logged for offline evaluation (height, gyro/accel output, etc.). All this data is analysed through charts to determine the hows and whys of each test.



Because we now start to fly the system in a closed loop, where the autopilot actually controls the engine and servos, we need a safety arrangement that ensures we can resume control if something goes fishy. The IMU residing on the drone has therefore been programmed to stop listening to autopilot commands if it receives a hard-right input from a normal RC transmitter. Once it has received a hard right, it will no longer process any information from the autopilot, and the drone (fixed wing) will fly 100% on the input of the RC transmitter, acting like a normal RC-controlled airplane.
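The takeover mechanism is essentially a one-way latch, sketched here under assumptions (the pulse-width threshold is mine; typical RC channels run roughly 1000-2000 us):

```cpp
// One-way manual-takeover latch: once a hard-right pulse is seen on the
// RC roll channel, autopilot commands are ignored for the rest of the flight.
struct OverrideLatch {
    bool manual = false;

    void onRcRoll(int pulse_us) {
        if (pulse_us > 1900) manual = true;  // hard right latches manual mode
    }
    bool autopilotAllowed() const { return !manual; }
};
```

The latch being one-way is the safety point: a glitchy autopilot cannot talk its way back in once the pilot has taken over.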

Furthermore, an automated checkout procedure built into the state machine measures radio connection quality, sensor flux, GPS lock, etc. before allowing the plane to leave the ground.

Lastly, the autopilot features speech output of important parameters, warnings, etc., which is a great help, since you don't have to look at a screen while the plane is in the air.



As mentioned, the main objectives are to exercise and analyse the backbone systems, such as the state machine, telemetry system and maneuvers that subscribe to data from sensors and use them during their operation.

Auto takeoff worked quite well: each throw was correctly detected, and a suitable delay before turning the motor on was present, to allow the throwing hand to get out of the way. Once the height of 10 meters was reached, the state machine switched to waypoint navigation. I noticed that the plane took some shallow turns while climbing, which suggests that a heading-hold function would probably be a good thing. It could be implemented by detecting the heading of the launching throw and then trying to keep the plane on that path during the climb.

Auto land was dead simple: set the throttle low, wait until the plane is 2 meters above ground, then turn the engine off altogether. This seemed to work perfectly, although a heading hold would probably be beneficial here too.

The interesting part was the waypoint navigation. I thought it fun to try a very simple algorithm to determine whether the plane should turn left, turn right or head straight to reach the waypoint. I kind of knew from the start that it probably required a PID loop, but I wanted to see what would happen if I used a simple "if I'm left of the waypoint, turn right by a fixed value" kind of logic.
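That naive "bang-bang" logic can be sketched like this (a minimal Java sketch; the class name, thresholds and PWM-style values are my own invention, not the project's actual code):

```java
// Naive bang-bang waypoint steering: if the waypoint is to the left,
// bank left by a fixed amount; if to the right, bank right.
// Headings/bearings in degrees, roll command in multiwii-style PWM units.
public class BangBangNav {
    static final int ROLL_NEUTRAL = 1500;
    static final int ROLL_STEP = 150;   // fixed correction - the cause of the overshooting

    // Smallest signed angle from current heading to bearing, in [-180, 180).
    public static double headingError(double heading, double bearing) {
        return ((bearing - heading) % 360 + 540) % 360 - 180;
    }

    public static int rollCommand(double heading, double bearingToWaypoint) {
        double e = headingError(heading, bearingToWaypoint);
        if (e > 5)  return ROLL_NEUTRAL + ROLL_STEP;   // waypoint to the right -> bank right
        if (e < -5) return ROLL_NEUTRAL - ROLL_STEP;   // waypoint to the left  -> bank left
        return ROLL_NEUTRAL;                           // close enough, fly straight
    }
}
```

Because the correction is a fixed value regardless of how large the error is, the plane keeps blasting past the desired heading, which is exactly the circling behaviour in the chart below.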

The chart below shows the almost endless circles the plane took to finally reach the waypoint. A lot of overshooting took place, and the plane generally flew eagle style, with a lot of huge turns, slowly getting nearer the waypoint - the journey took 32 seconds - not good :-)



The above graphic just shows why PID control was invented. In short, a PID controller regulates a system toward an optimum by taking into account past deviations, the current deviation and predicted future deviations. So instead of the simple compare-based logic, the waypoint navigation was "upgraded" with a PID loop that took the current heading and desired heading as input and output a correction to the roll of the plane. After some tuning against previously recorded flight data, it was time to repeat the test with PID control. The chart below shows how the plane (once the autopilot went into navigation mode) makes a steep turn and heads quite straight for the waypoint (the wind was quite strong at the time). The journey this time took roughly 5 seconds.
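A minimal version of such a heading-to-roll PID loop could look like this (gains and names are illustrative; the actual loop and tuning values aren't shown in the post):

```java
// Minimal PID loop: input is the heading error in degrees, output is a
// roll correction. dt is the telemetry update interval in seconds.
public class HeadingPid {
    private final double kp, ki, kd;     // hypothetical gains, found by tuning
    private double integral = 0, lastError = 0;
    private boolean first = true;

    public HeadingPid(double kp, double ki, double kd) {
        this.kp = kp; this.ki = ki; this.kd = kd;
    }

    public double update(double error, double dt) {
        integral += error * dt;                                    // past deviations (I)
        double derivative = first ? 0 : (error - lastError) / dt;  // predicted future (D)
        first = false;
        lastError = error;
        return kp * error + ki * integral + kd * derivative;       // current (P) + I + D
    }
}
```

The proportional term alone already removes the fixed-step overshooting: a large heading error produces a large roll correction, a small error a small one.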


The test was repeated several times with the same result, so hurray for PID control! Most autopilots use PID to control height, roll etc., so I'm not breaking new ground here, but the two images nicely show the effect of an efficient PID controller.

Next steps

It's certainly doable to separate the autopilot from the actual aircraft (which made developing and on-site tweaking so much easier), and the basic autopilot features have been achieved. There are a lot of practical limitations in the separation, but also solutions to most. A lot has been learned from the small plane (designated UTV-3 - Universal Test Vehicle), and as the flying season is winding down, it's time to start building a new and larger version which will carry a host of sensors and allow further testing of e.g. the ALF system, as well as letting one drone control other drones in a coordinated effort. The next plane will be based on the Phantom FX61 airframe, a bigger brother of the UTV-3.

Read more…

An Autopilot in the clouds - part 3


This article is part three in a series describing some of the inner workings of an autopilot system - even though the description is specific to this particular system, a lot of the principles are generic and apply to a variety of autopilot systems.

Today we take a look at the maneuver system.



For each type of vessel, there is a library of maneuvers. Some are common to all vessels and some are specific to a particular type of vessel or configuration.

A maneuver can either be of a simple nature, like turning right until another maneuver is issued by the autopilot, or a more complex one that has some logic embedded (timing etc.) or changes the flight path dynamically based on data from sensors.

A maneuver can be synchronous or asynchronous, meaning that it either does something quickly (e.g. issues a roll correction command) and then terminates, or it does something that takes a while (e.g. navigates the drone to a waypoint), which it does in its own thread, reporting back to the main system when it's finished. Asynchronous maneuvers have a built-in kill switch, which can be activated by the autopilot if certain conditions arise - e.g. the range guard system triggers, the state machine wants to change state, or the maneuver simply takes too long to execute. Maneuvers can also be temporarily paused.
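A sketch of how such an asynchronous maneuver with a kill switch and pause could be structured (class and method names are my own, not the project's actual code):

```java
// Sketch of an asynchronous maneuver: runs in its own thread, polls a
// volatile kill flag, and reports back when finished.
public abstract class AsyncManeuver implements Runnable {
    private volatile boolean killed = false;
    private volatile boolean paused = false;
    private volatile boolean done = false;

    public void kill()            { killed = true; }  // range guard / state machine calls this
    public void pause(boolean p)  { paused = p; }     // temporary pause
    public boolean isDone()       { return done; }

    protected abstract void step() throws InterruptedException;  // one iteration of the maneuver
    protected abstract boolean finished();                       // e.g. "waypoint reached"

    public void run() {
        try {
            while (!killed && !finished()) {
                if (!paused) step();
                Thread.sleep(50);   // roughly the telemetry rate
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        done = true;                // maneuver-done signal back to the main system
    }

    public void start() { new Thread(this).start(); }
}
```

A simple synchronous maneuver, by contrast, would just be a method call that returns after a few milliseconds.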


Example - The simple maneuver

Suppose the drone is in the air and you've just activated the free flight mode (which means that the drone is controlled manually by the operator). If you issue a turn left command, the system executes a turn left maneuver. Since the maneuver only does one thing - sends a left-flap-down, right-flap-up command to the wing servos - it is considered simple and only takes a few milliseconds to perform. The drone starts to turn and will continue to do so until the operator issues another command or the auto part of the autopilot takes over for some reason.


The advanced maneuver

Once it’s time to get the drone safely back to earth (and we are talking a flying wing here), the autoland maneuver is executed. This maneuver could be a simple turn-off-the-engine-and-lets-hope-for-the-best type of thing, but I’ve made it a bit more dynamic.

The strategy is to let the maneuver store the planes heading and try to keep that heading all the way until the plane has landed. Furthermore we want to keep the engine running (a little bit) until we are a few meters over ground to have some thrust for steering.

The maneuver is asynchronous, because we want it to terminate when the plane is safely on the ground so that the state machine can transition to the safe state.

The maneuver starts by subscribing to the telemetry coming from the plane sensors. This ensures that each time the telemetry is updated, the autoland maneuver is fed the new data.

The maneuver then stores the current heading and locks on to it, applying correctional turns if the plane starts to point off that heading by more than 10 degrees.

At the same time it tells the engine to spin slowly (approx. 15%).

Then it waits for the altimeter to report that the plane has sunk below 2 meters altitude. When it has, it turns off the engine completely but keeps applying correctional turns if the plane starts to drift off its chosen heading.

The maneuver also listens to the gyros and accelerometers. When no forward motion is detected for a period of time (a few seconds), the maneuver considers its job done and issues a maneuver-done signal back to the main system, which triggers the state machine to change to safe mode.
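Putting the steps above together, the autoland logic could be sketched roughly like this (the thresholds follow the description above; the class structure and names are my own invention):

```java
// Sketch of the autoland logic: lock the heading, idle the motor,
// cut it below 2 m, declare done when forward motion stops.
public class AutoLand {
    enum Phase { GLIDING, ENGINE_OFF, DONE }

    static final double HEADING_TOLERANCE = 10.0;  // degrees
    static final double CUTOFF_ALTITUDE = 2.0;     // meters
    static final int IDLE_THROTTLE = 15;           // percent - some thrust for steering

    private final double lockedHeading;            // stored when the maneuver starts
    private Phase phase = Phase.GLIDING;

    public AutoLand(double currentHeading) { lockedHeading = currentHeading; }

    // Called on every telemetry update; returns the throttle percent to command.
    public int onTelemetry(double heading, double altitude, boolean moving) {
        if (phase == Phase.GLIDING && altitude < CUTOFF_ALTITUDE) phase = Phase.ENGINE_OFF;
        if (phase == Phase.ENGINE_OFF && !moving) phase = Phase.DONE;  // maneuver done
        return phase == Phase.GLIDING ? IDLE_THROTTLE : 0;
    }

    // Correctional turn only when pointing off the locked heading by > 10 degrees.
    public double rollCorrection(double heading) {
        double e = ((lockedHeading - heading) % 360 + 540) % 360 - 180;
        return Math.abs(e) > HEADING_TOLERANCE ? e : 0;
    }

    public boolean isDone() { return phase == Phase.DONE; }
}
```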


One curiosity is that maneuvers can utilize other maneuvers to get their job done; e.g. the general "flying through waypoints" maneuver uses the simpler "fly to one waypoint" maneuver to get to each waypoint.


Next time

Initial flight testing, log analysis and optimization

Previous posts

Read more…

An AutoPilot in the Clouds - part 2


In part 2 of this blog, I take a closer look at the communication protocol, range guard and state machine of the autopilot. As with part 1, this blog post is for those who think developing drone software is interesting and would like to know some of the thoughts that have gone into making this particular system.

Some may ask why one would go through the trouble of taking the autopilot off the airframe, when it's been sitting there happily for so long. The simple answer is: interconnecting is more fun!

A long time ago, we had personal computers but no internet to share data through. Data had to be transported on floppy disks and magnetic tape. Facebook, Twitter, online multiplayer gaming, email and a zillion other things were not practical without the connecting abilities of the internet.

In the old days, the first power plants served only a few local businesses, with no way to handle sudden breakdowns. Today we have connected states and even countries through our power lines, to be able to sell electricity or cope with breakdowns on the grid.

In short - we tend to want to connect solitary systems, to make them more usable.

That's why I find this project fun. By making the autopilot a central entity that is connected to multiple drones, the possibilities of what these drones can achieve together are endless. See it as a low-cost hobby experiment - exploring what such systems could do for us in the future, or as an alternative to the APM type of autopilot.

Another advantage is the possibility of running the autopilot on a laptop, where modifications are easy to do during flight tests and no expensive hardware is damaged, should the craft self-destruct.

Finally, this system is versatile in the way that it connects to its surroundings, so if you’d like to have the autopilot stay in the airframe, it simply stays there and connects through the serial port… 

The protocol

The system relies heavily on constant communication with the drone. The drone sends a lot of telemetry metrics back to the autopilot, which the autopilot interprets and reacts to, sending back correctional data to make the drone achieve its mission. Stability is handled locally on the drone by a modified MultiWii, and it is therefore only correctional data such as throttle, pitch and roll that is sent back to the drone.

As we want to be able to control as many drones as possible with only one radio link, the communication protocol has to be as efficient as possible - meaning, don't include data that is not needed, and send it as compactly as possible.

I’ve not optimised this to the fullest but sending data to the drone only takes up 8 bytes - these are:

[Start data byte] [Drone id byte] [ROLL] [PITCH] [THROTTLE] [YAW] [Checksum] [Data done byte]

Start data byte simply indicates that a transmission of data is about to begin.

Drone id byte indicates which drone this data is intended for.

Roll, pitch, throttle and yaw are the native values that MultiWii normally uses as input from an RC radio, each a number ranging from 1000 to 2000 (PWM cycles). A byte can only describe the range 0 to 255 (or -128 to 127). The values are therefore divided by 10 before sending, and multiplied by 10 upon reception. We lose a little precision, only being able to send values in increments of 10, but that's acceptable.

Checksum is a control byte that is calculated from the previous data - as this checksum has to be computed fast on the receiving embedded system, I chose a simple XOR checksum. It is still a pretty good error detection mechanism, and should faulty values come through, the autopilot will correct them upon the next telemetry update.

Data done byte signals the end of the data packet.

Should the checksum, or the number of bytes between start byte and end byte, be wrong, the transmission is discarded.
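The whole 8-byte packet can be sketched like this (the actual start/end marker values aren't given in the post, so the ones below are placeholders):

```java
// Sketch of the 8-byte command packet:
// [START] [drone id] [ROLL] [PITCH] [THROTTLE] [YAW] [xor checksum] [END]
public class DronePacket {
    static final byte START = (byte) 0xFE;   // placeholder marker values
    static final byte END   = (byte) 0xFD;

    // roll/pitch/throttle/yaw are multiwii PWM values (1000-2000),
    // divided by 10 so each fits in one byte.
    public static byte[] encode(int droneId, int roll, int pitch, int throttle, int yaw) {
        byte[] p = new byte[8];
        p[0] = START;
        p[1] = (byte) droneId;
        p[2] = (byte) (roll / 10);
        p[3] = (byte) (pitch / 10);
        p[4] = (byte) (throttle / 10);
        p[5] = (byte) (yaw / 10);
        p[6] = (byte) (p[1] ^ p[2] ^ p[3] ^ p[4] ^ p[5]);  // simple xor checksum
        p[7] = END;
        return p;
    }

    // Returns {roll, pitch, throttle, yaw} multiplied back up, or null on error.
    public static int[] decode(byte[] p) {
        if (p.length != 8 || p[0] != START || p[7] != END) return null;      // bad framing
        if ((byte) (p[1] ^ p[2] ^ p[3] ^ p[4] ^ p[5]) != p[6]) return null;  // bad checksum
        return new int[] { (p[2] & 0xFF) * 10, (p[3] & 0xFF) * 10,
                           (p[4] & 0xFF) * 10, (p[5] & 0xFF) * 10 };
    }
}
```

Note the `& 0xFF` on decode: values above 127 (e.g. 150 for a 1500 PWM value) come back as negative Java bytes and must be read as unsigned.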


As with the USAV (the Android-based autopilot), the system features a range guard system that acts independently of all other systems. If the drone gets higher or further away than what's allowed, it can either land the craft or turn it around, regardless of what the craft is doing. Internally this is handled by the state machine, which ensures that the proper transitions are made from whatever state the craft was in when the range guard was activated.

The State machine

The state machine controls the basic actions of the system and makes sure that things happen in the correct order. While testing auto takeoff and landing, the state machine is configured to simply go through the preflight check, wait for the auto takeoff command, execute the auto takeoff and immediately issue the auto land maneuver once the takeoff has been carried out.

In blocks, the configuration looks like this, where solid arrows show automatic transitions and transparent arrows represent user-determined state changes.


A state consists of a number of things. 

Condition dependencies - A state is not allowed until certain conditions are fulfilled, e.g. the preflight state is not allowed until we have solid telemetry (GPS fix etc.).

Allowed pre-states - A state can only occur if the previous state is on its list of allowed states. This makes sure that you cannot land before you've actually taken off, or go into preflight whilst in the air.

State precode - Any code that will prepare the system/craft for the coming state.

State main code - The code that is to be executed once the state is effective.

State post code - Any code that should be executed when leaving the state

Auto advance to next state - Indicates whether the state should automatically advance to another state once the state main code has finished.
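The ingredients above could be modelled roughly like this (a hedged sketch; field and class names are my own, and the real implementation surely differs):

```java
import java.util.List;
import java.util.function.BooleanSupplier;

// Sketch of one configurable state in the state machine.
public class State {
    final String name;
    final List<String> allowedPreStates;   // states we may transition from
    final BooleanSupplier condition;       // e.g. "solid telemetry / GPS fix"
    final Runnable preCode;                // prepares the system/craft for the state
    final Runnable mainCode;               // executed once the state is effective
    final Runnable postCode;               // executed when leaving the state
    final String autoAdvanceTo;            // next state name, or null for none

    public State(String name, List<String> allowedPreStates, BooleanSupplier condition,
                 Runnable preCode, Runnable mainCode, Runnable postCode, String autoAdvanceTo) {
        this.name = name; this.allowedPreStates = allowedPreStates; this.condition = condition;
        this.preCode = preCode; this.mainCode = mainCode; this.postCode = postCode;
        this.autoAdvanceTo = autoAdvanceTo;
    }

    // A transition is only legal from an allowed pre-state AND when the
    // condition dependencies are fulfilled.
    public boolean canEnterFrom(String currentState) {
        return allowedPreStates.contains(currentState) && condition.getAsBoolean();
    }
}
```

The state machine itself then just runs preCode, mainCode and postCode in order, refusing any transition for which `canEnterFrom` is false - which is how "you cannot land before takeoff" falls out for free.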

Below is a block example of the preflight state.


The state machine is easily configurable which means that it will grow to something like this, once more testing has been done. The state machine will most likely contain a free flight state, where the user is able to control the craft directly using an app.


As you can see, the range guard system is a fully independent state, which will be triggered if the craft gets too far away, disabling any current activity and switching to auto land.

Next time

The system relies on predefined maneuvers. A maneuver can either be very simple (turn right) or pull in telemetry data, include logic etc., as is the case with the auto takeoff maneuver. I'll show some examples.

Also, some graphs and logs from flight testing.

Read more…

An AutoPilot in the Clouds - part 1


In our RC world, autopilots are typically some kind of software/hardware combination physically located on the craft that is flying/driving/sailing. This has some disadvantages, since such autopilots are expensive and singular, meaning that they only know themselves and have no interaction capabilities with other autopilots.

As an example, a friend once asked me what it would take to make a drone that could analyse a large field of crops, taking samples of soil where the vegetation was less eager to grow. My reply was "Two drones!" - one fixed wing to fly over the field, doing a FLIR/NIR analysis - and one copter type to fly out, land, take samples and return them.

The operation had to be coordinated somehow - so the copter would know where to land etc.

In short - some problems are not solved using one single type of drone, but rather a combination of drones with different capabilities.

In the dungeons of my cellar, piled with winter clothes, umbrellas, broken china and old lamps, I've been working on a universal autopilot system: a central computer system capable of guiding several drones and coordinating their activities. The main software can be executed on a very small computer, e.g. a Raspberry Pi or similar, or on a laptop. The drones run a modified version of MultiWii 2.2, which sends very compact telemetry data several times each second and is able to receive commands from the autopilot on the ground.

The idea is to keep the autopilot on the ground (or on a mother drone), keeping track and issuing flight directions using short and efficient telemetry messages between drone and main computer. Several craft can be controlled by the software at once, making multi-drone missions possible and coordinating the drones so they do not collide.

Each drone uses an XRF radio module to send back telemetry as well as receive orders on direction, speed, altitude etc. The XRF modules are great, very easy to set up and use, and have a range of up to 1.8 miles. Most serial transceivers are however usable.


Each drone uses a MultiWii board for stabilisation and serial communication, and a GPS module so the central computer knows where the drones are at all times.


The autopilot software (running on the central computer) is controlled by a state machine, meaning that a system is making sure that everything happens in the correct order - for instance, that auto land is only possible after takeoff, or that takeoff is not possible if the preflight sequence was unsuccessful or GPS lock has not been achieved. The state machine also makes sure that transitions between states are carried out correctly - e.g. going from autopilot to manual flight makes the state machine automatically pause the autopilot function and set the craft on a straight path before control is handed over to manual.

The system is easy to extend for different craft types as well as different hardware.

The guidance part is based on the USAV project described some decades ago (so it seems) here.

Currently I have a simple system working, with one fixed wing, called the UTV-3, based on the Z84 from Zeta Science.


I’ve implemented some simple maneuvers such as auto takeoff, auto land and simple waypoint navigation - simple stuff that I use to test the system with. Once everything is stable, I’ll be implementing the multi-drone hub needed to coordinate the work of several drones. Although I work a lot in my dungeon, I’m also lucky to live close to a field for RC enthusiasts where practical testing can be done. Videos to come.

Read more…

Imagine your drone suddenly running low on power - a long way from home, doing its auto waypoint flying thing.

You have to decide how, where and when to set it down very quickly!

I've been playing with my homebrew Automatic Landing Finder (ALF), which uses a photo to determine where a suitable landing spot would be. The algorithm takes things such as size, visibility etc. into account when determining which spot to pick. The system picks the best candidate among possibly thousands.

The idea is to have a camera pointing downward, taking snapshots at regular intervals. Should things go haywire, the system will analyse the last frame where the craft was still stable and try to find the best landing spot.

Eventually this system will be installed on the next version of the USAV (Unmanned Social Autonomous Vehicle) but for now it's being tested with "fake" images from google earth.

If you had to land anywhere on the picture below, where would it be?


Feeding the image above through the ALF reveals the places that ALF considers more or less good spots to land. The resulting image is greyscale because this is part of the process. Yellow squares are suitable candidates, and the red square is the chosen optimum landing site. Observe how dark spots (where poor visibility prevents a proper condition estimation) and vegetated areas are (mostly) avoided. Did you choose the same spot?
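The actual ALF criteria aren't published here, but a toy version of this kind of grid scoring could look like the sketch below: split the greyscale image into cells, reject dark cells (poor visibility) and busy cells (likely vegetation or obstacles), and prefer bright, uniform ones. All thresholds are invented for illustration.

```java
// Toy landing-spot scorer over a greyscale image (values 0-255, [row][col]).
public class LandingScorer {
    // Returns {row, col} of the best cell's top-left corner, or null if none qualify.
    public static int[] bestCell(int[][] pixels, int cellSize) {
        int bestR = -1, bestC = -1;
        double bestScore = -1;
        for (int r = 0; r + cellSize <= pixels.length; r += cellSize) {
            for (int c = 0; c + cellSize <= pixels[0].length; c += cellSize) {
                int n = cellSize * cellSize;
                double mean = 0, var = 0;
                for (int i = 0; i < cellSize; i++)
                    for (int j = 0; j < cellSize; j++) mean += pixels[r + i][c + j];
                mean /= n;
                for (int i = 0; i < cellSize; i++)
                    for (int j = 0; j < cellSize; j++) {
                        double d = pixels[r + i][c + j] - mean;
                        var += d * d;
                    }
                var /= n;
                if (mean < 60 || var > 400) continue;   // too dark or too textured
                double score = mean - var / 10;          // prefer bright, uniform cells
                if (score > bestScore) { bestScore = score; bestR = r; bestC = c; }
            }
        }
        return bestScore < 0 ? null : new int[] { bestR, bestC };
    }
}
```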


How fast did you determine a proper place to land? The computer took roughly 842 ms to find a good spot.

Below is the same result with all candidates removed, only showing the optimum landing site.


I'd love to get some feedback on what you flying guys out there think is a good emergency landing spot in general as well as what is a bad one! :)

Best regards

Jesper Andersen

Read more…

Processing power for Drones!

Ok - so the critical subsystems for keeping balance, heading etc. are probably best served by a RISC-type processor like the ARM and ATmega processors. But on top of that, you could put something that allows easy access to image processing, communication, logging etc. In the USAV project I use an Android phone (600 MHz ARM) to handle the more high-level stuff - such as navigation, telemetry, Twitter integration etc. Even though this system is remarkably fast when doing just that (a distance + heading calculation takes roughly 2 milliseconds), it will be affected once it starts doing image processing.

I just stumbled across this little thing, which surely would be more powerful and extendable. It has a 1.6 GHz Atom processor and uses only roughly 4 watts. You can run any operating system on it and connect USB devices.

The size is only 3.50 x 2.36 x 0.63 inches (89 x 60 x 16mm) which would make it fit in many UAV's.

Read more…

USAV - Telemetry system

USAV is the Unmanned Social Aerial Vehicle - a drone with an Android-based autopilot system that communicates through Google Talk and Twitter. Due to the weather, and the fact that an H-frame is being built for the vehicle, no flying is possible here. But I'm making good use of the wait :)

So snow is still falling here in the north and it's not really flying weather.

So I decided to make good use of the waiting and create a telemetry unit for the USAV.

I guess telemetry is to a drone what Tabasco sauce is to omelets. So of course the USAV should have this kind of thing as well. Since the USAV uses the internet for communication, getting telemetry from the vehicle is a bit tricky.

Normally, the XMPP protocol is used to talk to the craft and get info back, but this isn't really suitable for near-realtime telemetry. Instead I decided to use standard HTTP, but getting data from the onboard cell phone is not easy, since the phone has no public static IP address when using various cell towers to communicate.

So I created a middleware solution, where the USAV broadcasts its telemetry to a web service, and where clients can then get the telemetry using another web service. The middleware component has to have a static IP, but offers some nice features in return.

The middleware doesn't just broadcast the telemetry to anyone who's interested - it also stores each telemetry packet in a database for later analysis. This way, a telemetry client is not limited to getting only the current telemetry, but can also fetch older data, either as a single snapshot or as a series for playback of events.

The system looks somewhat like this


In the video below you'll see a short demonstration using a very primitive telemetry client - what you see is;

1. The USAV becomes online on the Google talk network

2. The telemetry client is started

3. Telemetry broadcasting is started on the USAV

4. Telemetry is received

5. GPS lock is achieved

6. The craft is turned 360 degrees

The telemetry unit onboard the craft is very loosely coupled, meaning that any data can be transmitted as telemetry data, using a single line of code for each and does not have to be initialized from the beginning.  

The rate of updates from the craft is configurable and currently set to once every second.

Instead of using this ugly command-line based client (seen in the video), one could build a nice app, web or native GUI to display the data in a nice way.

I really do hope I get to fly with this thing soon, so I can post some flight videos with the whole setup running.

Until then - all the best.


Read more…

USAV - An architectural update + video


Here's an update on the USAV project I started some months back. This post is mostly about the architecture and features of the current version as well as a video of a static test.

The USAV is a UAV with integration to some of the social networks like Twitter and Google Talk. It is also an Android-powered guidance system, currently implemented in a standard 450 Flame Wheel quadcopter.

So it's been a while since the last post about the USAV - although slowly, things have progressed, mostly on the software side, although the quad itself has been upgraded with a new frame and the KK2.0 from HobbyKing (and downgraded again - longer story).

Denmark is a cold and wet kind of place, especially during winter, and is certainly not ideal for anything that's electric, (a bit) fragile and preferably operated outside. So my last flight with the USAV was in September, where the craft suffered a near-death experience (one engine and the IOIO board did experience the real thing). From then till now, I've spent my time rebuilding and adding features to the software.....oh, and I also became a father :)

Today I would like to give some insight in the project as well as a little video of how you interact with the craft.

The software

As mentioned, the brains of the craft are implemented on the onboard Android-based cell phone. The software is written as a Java-based app. The main building blocks of the app are shown above.

As you can see, the typical smart phones features such as internet connection, GPS, compass, accelerometers and camera are all being utilized giving the "brain" a lot of interesting possibilities during flight.

The system allows you to manually control flight by chatting with the craft during flight, as well as to set a route through waypoints and let the autopilot, in turn, guide the craft to the individual waypoints. Each waypoint can have some extra characteristics, such as:



follow (a special type of waypoint that moves, e.g. another smartphone broadcasting its GPS coordinates)

Waypoints can be created either by specifying GPS coordinates or by address. A return point is automatically added to the flight manifest to ensure that the craft returns to the starting point.

The auto pilot part is as of now very simple - utilizing the GPS and compass to determine where the craft is in relation to the current waypoint, and executing commands to get closer.
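That distance + heading step is classic great-circle math; a sketch of the two calculations (standard haversine distance and initial bearing - not the project's actual code, but the textbook formulas it would rely on):

```java
// Great-circle distance and initial bearing between two GPS coordinates.
public class Geo {
    static final double EARTH_RADIUS_M = 6371000.0;

    // Haversine distance in meters.
    public static double distanceMeters(double lat1, double lon1, double lat2, double lon2) {
        double p1 = Math.toRadians(lat1), p2 = Math.toRadians(lat2);
        double dp = Math.toRadians(lat2 - lat1), dl = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dp / 2) * Math.sin(dp / 2)
                 + Math.cos(p1) * Math.cos(p2) * Math.sin(dl / 2) * Math.sin(dl / 2);
        return 2 * EARTH_RADIUS_M * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
    }

    // Initial bearing from point 1 to point 2, 0-360 degrees, 0 = north.
    public static double bearingDegrees(double lat1, double lon1, double lat2, double lon2) {
        double p1 = Math.toRadians(lat1), p2 = Math.toRadians(lat2);
        double dl = Math.toRadians(lon2 - lon1);
        double y = Math.sin(dl) * Math.cos(p2);
        double x = Math.cos(p1) * Math.sin(p2) - Math.sin(p1) * Math.cos(p2) * Math.cos(dl);
        return (Math.toDegrees(Math.atan2(y, x)) + 360) % 360;
    }
}
```

The pilot compares the bearing to the waypoint with the compass heading and turns toward whichever side the difference falls on.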


Sometimes, when you develop software, you tend to model the real world as much as possible. Sometimes this makes perfect sense, but in the case of the autopilot, I chose a different approach.

A real-world pilot is very specialized in his knowledge of the craft he is piloting, i.e. an aircraft pilot may not know how to pilot a helicopter, or in principle a car for that matter.

This software pilot is created (like many other autopilots) so that it doesn't know any specifics about the craft it pilots and can therefore be used with all kinds of craft (cars, boats, planes and helicopters). The autopilot relies on a stack of modules that can translate the pilot's direction commands into something specific for the particular craft, as shown below.
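The translation stack could be sketched like this (interface and adapter names, and the PWM mappings, are purely illustrative - the point is only that the pilot talks to an interface, never to a specific craft):

```java
// Craft-agnostic pilot commands, translated per craft type by an adapter.
public class CraftStack {
    public interface Craft {
        void turn(double degreesPerSecond);    // +right / -left
        void climb(double metersPerSecond);
        void setSpeed(double metersPerSecond);
    }

    // One adapter per craft type; new craft types just implement Craft.
    public static class FixedWing implements Craft {
        public int roll = 1500, pitch = 1500, throttle = 1000;  // PWM channel outputs
        public void turn(double dps)     { roll = 1500 + (int) (dps * 5); }
        public void climb(double mps)    { pitch = 1500 - (int) (mps * 50); }
        public void setSpeed(double mps) { throttle = 1000 + (int) (mps * 40); }
    }

    // The pilot itself never knows what it is flying.
    public static void flyToward(Craft craft, double headingErrorDegrees) {
        craft.turn(Math.max(-30, Math.min(30, headingErrorDegrees)));  // clamped turn rate
    }
}
```

A quadcopter adapter would map the same `turn`/`climb`/`setSpeed` calls onto its own channels, and `flyToward` would work unchanged.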


The clever thing about this is that new craft and control boards can be implemented quickly through Java interfaces.

Current features highlight

Okay, so where are we right now in all this? To put it short, I have a lot of code and very few flying hours (minutes, actually), so there are a lot of untested features. So far, tests have been mostly focused on communicating with the craft and doing some basic maneuvers. Since I am new to the wonderful drone world, this has been a slow and very Google/DIY-Drones-driven development cycle, and I'm sure there are a lot of obstacles yet to be solved.

A selection of the features currently implemented:


* Speech Synthesis feedback

* Way-points are added using gtalk

* Street address to GPS coordinates


* Range safety (abort mission if flight path becomes too long)

* Timer safety (mission abort after xx seconds)

* Auto return to starting point


* Soft throttle (changes throttle gradually - saves on props and motors)

* Calculate distance traveled / remaining in real time

* Auto land using accelerometers

* Logging of flight data

* Kind of accurate altitude calculation using GPS 


* A large set of gtalk commands

* Twitter integration (text and camera pictures)

Command set

Using GTalk, these are the commands currently implemented:


Below is a video demonstrating some basic features in a static test. A bit dull I'm afraid, but at least you'll get an idea on how you interface with the USAV.

When the weather allows, I will take the craft for a flight and will surely need your advice on tuning the controller for the most stable and efficient flight. Currently I'm using xcopter firmware 1.1 - should I change to KapteinKuk 4.7?

Best regards

-Jesper A

Read more…



First of all – thank you for the warm welcome into the DIY Drones family. I am very grateful that so many people spent time reading my first blog post about the USAV.

So the idea of what exactly the USAV is, is starting to form, and while there is a lot of practical stuff to work on (changing the frame, gluing nuts and bolts, deciding which configuration to fly), the more abstract thoughts about what the USAV should do have to be developed alongside.

One of its features should be the possibility to give it some kind of task and let the brains of the USAV solve it. This may sound very AI, but even AI is at its heart a bunch of formulas and computations.

The USAV could in some regards be seen as an autonomous satellite with a bunch of sensors on it, which it should utilize to solve the task at hand. We have a camera, magnetometer, accelerometers, Wi-Fi radio, GSM/GPRS radio, microphone etc., and it could be equipped with even more sensors. Also remember that the internet is available during the operation.

As with all life (even Artificial Life), life must have a purpose.

So today I would like to ask this wonderful community for ideas of what kind of tasks the USAV could be asked to perform. Any suggestions are welcome, fun, useful, technically demanding….

To start you off – here are some of my own ideas

  • Find all public Wi-Fis in the area. Requires: built-in Wi-Fi to locate publicly accessible Wi-Fi and measure the signal strength
  • Measure GSM signal strength in the area, find dead spots. Requires: built-in GSM radio
  • Find a red car. Requires: built-in camera + pattern recognition
  • Follow me - the USAV follows a person's every move. Requires: the person to broadcast his position to the USAV
  • Fly to (some address). Requires: functionality to resolve addresses to GPS coordinates, built-in camera + some intelligent way of determining a good landing spot
  • Map area - use the onboard camera to create a detailed aerial image. Requires: camera and some image-stitching functionality

Please leave a comment describing

1) The task

2) Special requirements (sensors, setup etc.)

Read more…

Introducing the USAV.


(Disclaimer - I'm not a native English speaker, but through endless re-runs of Predator, Total Recall and Terminator I have perfected my English skills.... not)

On the outer skin it's a standard 450 quadcopter - but the brains are a new approach to the standard UAV.

Most amateur UAVs today either use a remote control system, an ArduPilot or a combination of the two.

While the Arduino platform is very well made and has most impressive features, I wanted to create a different approach.

Being a software designer, I decided to let the brains of my UAV be based on the well-known Android platform - in other words, program a smartphone to fly the UAV and utilize all the nifty features and gadgetry these phones are born with (GPS, accelerometers, compass, camera, GPRS radio etc.).


You may argue that it's an expensive solution, but obtaining an old Sony Ericsson X10 Mini was easy and very affordable (around $25).

So having turned my back on the wonderful world of Arduino, I instead turned to Sparkfun's IOIO board.


The particular upsides of this board are amongst others;

* Works with android 1.5 and up

* Has adjustable charging for the phone

* Does not require any USB shield, can talk directly to a connected phone

* Phone connects through standard USB or bluetooth

* A very simple to use API makes it easy for a JAVA guy (like me) to talk to the IOIO board

The IOIO board can send and receive digital/analog/PWM signals etc., thus making it relatively easy to "fake" an RC receiver.

I've installed my homebrew app/autopilot on the android phone, just like a regular android app.

Granted - this is my very first quad, UAV and even hobby vehicle - although I make it sound easy, I actually had to go through all the tragedies, letdowns and broken props like everyone else in here. Also, this particular craft is not built for beauty, and I've probably made a lot of rookie mistakes, like using heavy plexiglass plates, 4 mm nuts etc.

So the navigation system of my UAV consists of an Android phone, the IOIO board and a KK control board.


As of now, the Android phone utilizes the GPS for location, the compass for heading, GPRS for communication and the accelerometers for figuring out if a catastrophe is near (i.e. if the phone is flipped, we are likely crashing and the engines are cut off). All data is logged to the SD card in the phone. The balancing of the quad is handled by the HK KK board.

SO! What makes the S of the USAV, I hear you ask. Well, my approach to flying might be somewhat different from other people flying UAVs - my vision goes toward the extreme of autonomy. I like systems that are able to make their own choices and that interact with the outside world through "normal" channels like everyone else.

Enter the S for social - this quad can fly to various predefined waypoints just like the ArduPilot, but it is also more autonomous in that you are able to give it a task, and the quad will find a way to solve it by itself ("do the dishes" is NOT an option - I tried it, and the wife hasn't forgiven me for breaking the 14th century Ming vases we dine and drink from). More on this subject will come....

Moreover - instead of fiddling with code and editors to tell the quad what you want, you simply use a chat client (GTalk) when you want something done or need a status - in other words, the quad is my new best chat buddy! I will uncover more of this in an upcoming post.

The merits of the quad are tweeted by the quad instantaneously during flight, so every masochist can follow the quad's doings in realtime.

I believe this is the very first UAV with social skills!

So where are we currently at? (we being me and my new best flying friend). Well, we are still learning basic maneuvers - lift off, landing, turning etc. The last flight really revealed that I have to look into adjusting the pots on the KK board to decrease a heavy oscillation that in the end rattled the KK board off its platform, resulting in a nasty crash and broken props - the last chat message from the quad was a simple "ouch!"

There are also problems with the compass being disturbed by the engines, and finally, a downward-pointing sonar would be nice to make the landings smoother.

Following the nature of blogs I will post updates and comments are very welcome :-)

Thank you for tuning in on this very first post from a new user of this awesome site.

Fly with whatever force you believe in



Read more…