The video shows localization data collected during a short flight over an IR marker, which serves as a visual landmark for estimating the copter's relative position. Note that this flight was flown manually, but the ultimate goal is automation. Detecting visual landmarks/features is a fundamental task in many forms of robot localization and navigation. For example, the Snapdragon Flight includes 4 camera sensors for visual-inertial odometry.

The plot in the video shows the copter's vision-based position estimate versus the traditional position estimate. The red data is logged by APM:Copter running on a Pixhawk with a 3DR GPS module. The blue data is derived from IR-LOCK sensor data, as it detects a MarkOne Beacon at approximately 50 Hz. A LidarLite rangefinder provides the AGL altitude measurements. The presented data looks nice, but this was a fairly tame test; we need to calibrate the lens before we can correctly handle larger pitch/roll angles.
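For those curious, the blue data boils down to similar-triangles geometry. Here is a minimal sketch (the function and variable names are hypothetical, not actual IR-LOCK or ArduPilot code), assuming a level, downward-facing camera and a pinhole lens model:

```python
def beacon_offset_level(px, py, cx, cy, f_px, agl_m):
    """Estimate the copter's horizontal offset from the beacon (meters),
    assuming a level, downward-facing camera.

    px, py : beacon's pixel coordinates in the image
    cx, cy : image center (principal point), in pixels
    f_px   : lens focal length, in pixels
    agl_m  : AGL altitude from the rangefinder, in meters
    """
    # Pixel offset over focal length = tangent of the angular offset
    tan_x = (px - cx) / f_px
    tan_y = (py - cy) / f_px
    # Similar triangles: horizontal offset = altitude * tan(angle)
    return agl_m * tan_x, agl_m * tan_y
```

This is also why the lens calibration mentioned above matters: the pixel-to-angle conversion is only this simple near the center of an undistorted image.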

You can think of this as a 'flipped' version of the StarGazer indoor robot localization system, where a unique visual landmark is placed on the ceiling. However, the copter localization problem is a bit trickier due to the extra degrees of freedom: the copter can pitch, roll, ascend, etc. So the copter's localization estimate also depends on the flight controller's state estimation, and ideally, all of the data would be fused together.
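To make the attitude dependence concrete, here is a hypothetical sketch (my own names and axis conventions, not flight-controller code) of how the camera-frame measurement could be rotated by the estimated roll/pitch before intersecting the ground plane:

```python
import numpy as np

def beacon_offset_tilted(tan_x, tan_y, roll, pitch, agl_m):
    """Rotate the camera-frame bearing to the beacon into a level
    frame using the flight controller's roll/pitch estimate, then
    intersect the resulting ray with the ground plane."""
    # Bearing ray to the beacon in the body frame (camera pointing down)
    ray_body = np.array([tan_x, tan_y, 1.0])
    # Roll about x, pitch about y
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    ray_level = Ry @ Rx @ ray_body
    # Scale the ray so its vertical component equals the AGL altitude
    scale = agl_m / ray_level[2]
    return ray_level[0] * scale, ray_level[1] * scale
```

Any error in the roll/pitch estimate leaks directly into the position estimate, which is part of why fusing all of the data matters.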

One of the key advantages of a uniquely identifiable visual landmark is that it can be used to remove drift in velocity and/or position estimates, which is typically the role of GPS. This can also be accomplished by building a local map (i.e., SLAM)… With the MarkOne Beacon, we can also operate at night, but the video would be even more boring. :) Robust vision performance in variable lighting conditions typically requires some form of IR projection (see the Intel RealSense specs).
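In toy form, the drift-removal idea looks like a complementary filter: dead-reckon between detections, and nudge the estimate toward the absolute beacon-based fix whenever one arrives (a hypothetical sketch, not ArduPilot's actual EKF):

```python
def fuse_position(dead_reckoned, beacon_fix, gain=0.2):
    """Blend a drifting dead-reckoned position with an absolute,
    drift-free fix derived from the beacon detection."""
    return [p + gain * (m - p) for p, m in zip(dead_reckoned, beacon_fix)]
```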

Comment by Jiro Hattori on January 9, 2016 at 1:45pm

Very interesting testing.

Can you estimate altitude, instead of position, from the GPS position data and the IR-LOCK angle data?

Comment by Laser Developer on January 9, 2016 at 1:49pm

@Thomas, I've been meaning to ask why you've chosen this more complicated method - IR beacon on the ground and camera in the air - rather than the other way around, with the IR beacon in the air and the camera on the ground. It seems to my simple way of thinking that this removes the degrees-of-freedom problem and allows two ground-based cameras to give both position and altitude. Of course, you would need to feed the position information back to the bird, but since you are in close line of sight, you might be able to use a simple Bluetooth link.

Comment by Thomas Stone on January 9, 2016 at 2:49pm

@Jiro

The standard GPS data would most likely not be accurate enough for the calculation to work very well.
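To put rough numbers on it (a toy illustration, not our actual code): with the beacon offset from nadir, altitude would be the GPS-derived horizontal distance divided by the tangent of the measured angle, so GPS error gets amplified whenever the beacon is nearly below the copter:

```python
def altitude_from_gps_and_angle(d_horiz_m, tan_angle):
    """Altitude from the GPS-derived horizontal offset to the beacon
    and the IR-LOCK angle (tangent of the offset from nadir)."""
    return d_horiz_m / tan_angle

# Beacon ~2 m off-nadir at a true altitude of 10 m: tan(angle) = 0.2
print(altitude_from_gps_and_angle(2.0, 0.2))  # 10.0 m (true)
# A typical couple-of-meters GPS horizontal error swamps the estimate
print(altitude_from_gps_and_angle(4.5, 0.2))  # 22.5 m
```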

@Laser Developer

That is a fair question. The method you describe is essentially the motion capture method. Also, check out PreNav's system.

One of the reasons I do things the opposite way is the sun. I can filter out the sun's reflections, but if the sensor is directed toward the sun, that causes detection problems, and I am aiming for VERY reliable detection. The multi-camera system that you suggest could solve that issue, but that level of complication in the setup is not desirable for some applications (although it may be perfectly fine for other applications).

The extra degrees of freedom shouldn't be a problem after the data is properly fused with the other sensor data. Admittedly, the data fusion and filtering are mathematically complicated, but it is what it is.

With growing processing power, I imagine we will continue to add sensors that provide redundant information, and the EKF will actively weight the data sources to estimate the copter's state. For example, you could have optical flow, GPS, and stereo vision simultaneously influencing the velocity estimate. Then, if somebody turns out the lights, the GPS measurements will be weighted higher as the other sensor data deviates from the model… It's going to get complicated no matter what. :)
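A crude stand-in for that weighting (a hypothetical sketch, nothing like the real EKF implementation) is a variance-weighted average of the redundant measurements:

```python
def fuse_velocity(estimates):
    """Variance-weighted average of redundant velocity estimates,
    e.g. from optical flow, GPS, and stereo vision.

    estimates: list of (velocity, variance) pairs. A sensor that
    degrades (its variance blows up) is automatically down-weighted.
    """
    weights = [1.0 / var for _, var in estimates]
    return sum(v * w for (v, _), w in zip(estimates, weights)) / sum(weights)

# Lights on: all three sensors are trusted
print(fuse_velocity([(1.00, 0.01), (1.10, 0.25), (0.95, 0.04)]))
# Lights off: vision variances inflated, so GPS dominates
print(fuse_velocity([(0.20, 4.00), (1.10, 0.25), (0.30, 4.00)]))
```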

Comment by River Bian on January 9, 2016 at 9:02pm

Great work! You've figured out an interesting solution for precision landing and indoor navigation.
Comment by Volta Robots on January 10, 2016 at 3:43am

Thomas #1

Comment by lot on January 10, 2016 at 11:36am

Why not use more than one beacon, with different colors to distinguish them, or make a pattern of beacons to estimate altitude or orientation?

Comment by Thomas Stone on January 10, 2016 at 12:41pm

@lot

It would be nice to get an alternative altitude reading, just in case the rangefinder happens to be pointed at uneven terrain.
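As a sketch of how a beacon pattern could provide that reading (hypothetical code, assuming a pinhole camera and two beacons a known distance apart):

```python
def altitude_from_beacon_pair(baseline_m, pixel_sep, f_px):
    """Altitude from the pixel separation of two beacons a known
    distance apart - independent of the rangefinder."""
    return baseline_m * f_px / pixel_sep

# Two beacons 1 m apart, seen 64 px apart with a 320 px focal length
print(altitude_from_beacon_pair(1.0, 64.0, 320.0))  # 5.0 m
```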

Comment by Nick McCarthy on January 13, 2016 at 7:58am

Thomas, this is excellent work and a great example of development to come!

Comment by Bill Piedra on January 13, 2016 at 9:56am

Very impressive work, Thomas. Will you be entering the Ford/DJI Developer Challenge with this?

Comment by Thomas Stone on January 14, 2016 at 12:44pm

Thanks Nick and Bill. :)

@Bill

They haven't released the Ford/DJI rules package, but I assume that the IR Marker would not be allowed. 

Anyway, it would be easier to accomplish the truck landing with an APM/ArduPilot-based setup (and my IR-LOCK gear). Maybe I should give it a try if I can find some free time. :)

I can assure you that it will be a challenging controls problem when the truck is moving. A robust system would need to incorporate the motion data from the truck. It would be best if the truck had its own IMU+GPS. 
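In toy form (a hypothetical sketch, far from a flight-ready controller), the truck's motion would enter as a feedforward term, with the remaining relative position error closed by feedback:

```python
def landing_velocity_cmd(rel_pos, truck_vel, kp=0.8):
    """Velocity command for tracking a moving landing pad: feed the
    truck's measured velocity forward, and close the remaining
    copter-to-pad position error with a proportional term."""
    return [tv + kp * rp for tv, rp in zip(truck_vel, rel_pos)]

# Truck heading north at 5 m/s, copter 2 m behind the pad
print(landing_velocity_cmd([2.0, 0.0], [5.0, 0.0]))  # [6.6, 0.0]
```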
