Copter Localization w/ IR Landmark

The video shows localization data collected during a short flight over an IR marker, which is used to estimate the relative location of the copter with respect to the visual landmark. This flight was flown manually, but the ultimate goal is automation. Detecting visual landmarks/features is a fundamental task in many forms of robot localization and navigation. For example, the Snapdragon Flight includes four camera sensors for visual-inertial odometry.

The plot in the video shows the copter’s vision-based position estimate versus the traditional position estimate. The red data is logged by APM:Copter running on a Pixhawk with a 3DR GPS module. The blue data is derived from IR-LOCK sensor readings as it detects a MarkOne Beacon at approximately 50Hz. A LidarLite rangefinder provides the AGL altitude measurements. The presented data looks clean because this was a fairly tame test; we need to calibrate the lens before we can correctly handle larger pitch/roll angles.
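As a rough sketch of how the blue data can be derived: the beacon’s pixel position plus the rangefinder’s AGL altitude give a horizontal offset. The field-of-view and resolution values below are illustrative assumptions (not the sensor’s actual specs), and the copter is assumed to be level:

```python
import math

# Assumed optics for illustration only: 320x200 image, ~60/40-degree FOV.
IMG_W, IMG_H = 320, 200
FOV_X, FOV_Y = math.radians(60.0), math.radians(40.0)

def beacon_offset(px, py, agl_m):
    """Estimate the copter's horizontal offset (meters) from a ground
    beacon seen at pixel (px, py) by a downward-facing camera, assuming
    zero pitch/roll. agl_m is the AGL altitude (e.g., from LidarLite)."""
    # Convert pixel coordinates to angles from the optical axis.
    ang_x = (px - IMG_W / 2) / IMG_W * FOV_X
    ang_y = (py - IMG_H / 2) / IMG_H * FOV_Y
    # Project the viewing angles onto the ground plane.
    return agl_m * math.tan(ang_x), agl_m * math.tan(ang_y)
```

A beacon at the image center yields zero offset; one at the edge of the (assumed) 60-degree FOV at 5m AGL maps to roughly 2.9m of horizontal offset.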

You can think of this as a ‘flipped’ version of the StarGazer indoor robot localization system, in which a unique visual landmark is placed on the ceiling. However, the copter localization problem is a bit trickier due to the extra degrees of freedom: the copter can pitch, roll, ascend, etc. So the copter’s localization estimate also depends on the flight controller’s state estimation, and ideally all of the data would be fused together.
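To sketch why the flight controller’s attitude estimate matters: the camera ray to the beacon has to be rotated by the estimated roll/pitch before it is intersected with the ground plane. This is a simplified illustration with assumed axis conventions (downward camera, z down, yaw ignored), not the actual APM:Copter code:

```python
import numpy as np

def ground_offset(ray_cam, roll, pitch, agl_m):
    """Rotate a camera-frame ray to the beacon into the local level
    frame using the flight controller's roll/pitch estimate, then
    intersect it with the ground plane. Sketch only: downward camera
    aligned with the body frame, z pointing down, yaw ignored."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])  # roll about x
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])  # pitch about y
    ray_level = Ry @ Rx @ np.asarray(ray_cam, dtype=float)
    # Scale the ray so its vertical component equals the AGL altitude.
    s = agl_m / ray_level[2]
    return ray_level[:2] * s  # horizontal offset in meters
```

With zero tilt, a straight-down ray gives zero offset; at 0.1 rad of pitch the same ray maps to about 1m of offset at 10m AGL, which is exactly the error an uncompensated estimate would carry.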

One of the key advantages of having a uniquely identifiable visual landmark is that it can be used to remove drift in velocity and/or position estimates, which is typically the function of the GPS. This can also be accomplished by developing a local map (i.e., SLAM). With the MarkOne Beacon, we can also operate at night, though the video would be even more boring. :) Robust vision performance in variable lighting conditions typically requires some form of IR projection (see the Intel RealSense specs).
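A minimal illustration of using a drift-free landmark fix to bound drift, in the style of a complementary filter. The gain and drift rate are arbitrary assumptions, and a real system would fuse these in the EKF:

```python
def fuse(pos_inertial, pos_landmark, alpha=0.1):
    """Pull the drifting inertial position estimate toward the
    drift-free landmark fix. alpha is an assumed blending gain."""
    return (1.0 - alpha) * pos_inertial + alpha * pos_landmark

# Simulate 1 cm of dead-reckoning drift per step; the landmark fix
# keeps the estimate bounded instead of letting it wander off.
est, truth = 0.0, 0.0
for _ in range(200):
    est += 0.01             # inertial estimate drifts
    est = fuse(est, truth)  # beacon observation corrects it
```

Without the correction the estimate would drift to 2m over these 200 steps; with it, the error settles near 0.09m, the fixed point of the filter at this drift rate.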

Comments

  • Thanks Thomas. Actually, I replaced the 3.6mm lens with a wider one, and I may have shifted the optical axis a bit. After the calibration I found 6 degrees of misalignment along Y and 0.5 degrees along X. Your setup looks great as well. If the camera is used for precision landing, the closed-loop control will keep the vehicle over the beacon while descending, so highly accurate parameters are not necessary.

    I know there are functions for calibrating cameras in OpenCV, and maybe I could have a look. I found that this guy posted some very interesting tools and setups: http://julesthuillier.com/part-ii-get-your-position/

  • @Tiziano

    8 degrees of error seems extreme, especially near the center axis, and in the vertical direction. I have included (below) an illustration of the error in a recently-shipped unit. Based on some quick measurements, the vertical error is approximately 1 degree ... (also, some error could be attributed to the simplicity of the experiment/measurements)

    The coefficients represent a very simplified lens model, which is sufficient for 'centering' during precision landing. For other applications (e.g., localization), I would recommend a more complex lens model and calibration. The precision landing could possibly benefit from a better calibration, but I suspect that the gains would be minimal in that particular application. 

    [illustrations: measured lens error of a recently-shipped unit]
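The pixel-to-degree question in this thread comes down to the lens model. Here is a hedged sketch of the difference between a single constant coefficient (the simplified model) and a pinhole model, using assumed optics (60-degree FOV over 320 pixels, not necessarily the real sensor's values):

```python
import math

# Assumed optics: 60-degree horizontal FOV across 320 pixels.
FOV, WIDTH = math.radians(60.0), 320
F_PX = (WIDTH / 2) / math.tan(FOV / 2)  # pinhole focal length in pixels

def angle_linear(px):
    """Simplified model: one constant degrees-per-pixel coefficient."""
    return (px - WIDTH / 2) * (FOV / WIDTH)

def angle_pinhole(px):
    """Pinhole model: angle = atan(pixel offset / focal length)."""
    return math.atan((px - WIDTH / 2) / F_PX)
```

Both models agree on the optical axis (and at the edge, by construction), but mid-field they diverge by about a degree with these assumed optics — negligible for ‘centering’ during a landing, but meaningful for localization. A full calibration (e.g., OpenCV’s checkerboard-based calibrateCamera, which also fits distortion coefficients) would be the next step.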

  • @Thomas: as you know, I've been extensively testing the IR-Lock setup over the past few months. Recently I noticed that the camera's optical axis can be slightly off-center (up to 8 degrees, in my case). That is not a big issue if you are using the camera pointed vertically to drive the horizontal error to zero: as you keep descending, the drone is led to the beacon. I was wondering if you've ever noticed that, and how you are estimating the pixel-to-degree coefficient, as the one I calculated is different from yours.

  • @John

    Here is a successful test using a barometer. However, it would greatly benefit from a laser rangefinder. I think that is the first problem you should address in your case.

  • @John

    If you post your log (for firmware 3.3-and-up), I can check it for you. 

    Are you using a rangefinder for altitude readings, or the standard barometer? The easiest upgrade for precision landing performance is to use a laser rangefinder, so that you have an accurate AGL altitude reading. 

    The existing position controller actually cascades to a velocity controller. The desired position is updated according to the sensor readings, and the velocity controller is the inner loop. 

    Some of our users are experimenting with velocity controllers (w/out position control). I think the greatest benefit is that it is easier to directly tune the velocity controller. 
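The velocity-only idea mentioned above can be sketched as a PD loop on the measured beacon offset that outputs a velocity target directly, bypassing the position controller. Gains and limits below are illustrative assumptions, not tuned values, and this is not the ArduPilot implementation:

```python
class PDVelocityController:
    """Sketch of a velocity-only precision-landing loop: PD on the
    beacon's measured position error, output clamped to a velocity
    target. Assumed gains; not the ArduPilot implementation."""
    def __init__(self, kp=0.5, kd=0.1, vmax=1.0):
        self.kp, self.kd, self.vmax = kp, kd, vmax
        self.prev_err = None

    def update(self, err_m, dt):
        # Finite-difference derivative of the error (zero on first call).
        d = 0.0 if self.prev_err is None else (err_m - self.prev_err) / dt
        self.prev_err = err_m
        v = self.kp * err_m + self.kd * d
        return max(-self.vmax, min(self.vmax, v))  # clamp the target
```

The derivative term is what damps the oscillation over the beacon; in practice the error derivative would be low-pass filtered, since raw ~50Hz pixel measurements are noisy.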

  • @Thomas, I know you are the main one working on this, so I am hoping to pick your brain. I am really interested in the beacon approach to guided landing and very motivated to get it working as well as it can. I would like to operate indoors, but for my current plan it is more important to get more precise landing/tracking outdoors. I can get the platform to track the IR-LOCK beacon, but it oscillates a lot over the beacon, getting worse the lower the altitude, as I would expect from the current controller. What do you think about operating outdoors with GPS, but instead of updating the loiter position, using a velocity PD controller to pull the copter into place?

    Have you made any recent improvements to the precision-land controller beyond what is currently in the ardupilot repository?

  • @John

    I can't get into all of the solution details, but I can tell you about the difficulties. :)

    In particular, operating without GPS also means that you are operating without velocity estimates. Accelerometers, gyros, and magnetometers are not sufficient to estimate velocity for an extended period of time. The localization information from IR-LOCK can potentially solve this problem, but it's not super-easy.
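One hedged sketch of what ‘not super-easy’ hides: a drift-free velocity estimate can in principle be recovered by differencing successive beacon position fixes, but the raw differences are noisy and a real system would feed them into the state estimator rather than use them directly:

```python
def landmark_velocity(fixes):
    """Average the finite differences of successive (time, position)
    beacon fixes. Illustration only: raw differentiation amplifies
    pixel noise, so real use would fuse these into the EKF."""
    vels = [(x1 - x0) / (t1 - t0)
            for (t0, x0), (t1, x1) in zip(fixes, fixes[1:])]
    return sum(vels) / len(vels)
```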

  • @Thomas, I was wondering if you have done any work on getting IR-LOCK to work without GPS. I would love to use IR-LOCK indoors (and outdoors), and before I reinvent the wheel I was hoping you already had a start, some guidance, or some information on why the GPS approach was taken. I was thinking of implementing a simple PD velocity controller instead of just updating GPS loiter positions. Thanks for any help you can give.

  • Thanks Nick and Bill. :)

    @Bill

    They haven't released the Ford/DJI rules package yet, but I assume that the IR marker would not be allowed.

    Anyways, it would be easier to accomplish the truck landing with an APM/Ardupilot-based setup (and my IR-LOCK gear). Maybe I should give it a try if I can find some free time. :)  

    I can assure you that it will be a challenging controls problem when the truck is moving. A robust system would need to incorporate the motion data from the truck. It would be best if the truck had its own IMU+GPS. 

  • Moderator

    Very impressive work, Thomas. Will you be entering the Ford/DJI Developer Challenge with this?
