Autonomous Flying Using the RealSense T265 Tracking Camera in a GPS-Denied Environment


As part of Google Summer of Code (GSoC), I have the privilege of mentoring a talented PhD student from Nanyang Technological University, Singapore, named Thien Nguyen.

Since the beginning of this project, Thien has delivered a series of labs that serve not only as milestones for the project but also as a step-by-step guide for anyone who wishes to learn how to harness the power of computer vision for autonomous robots. The labs include:

  1. Lab 1: Indoor non-GPS flight using AprilTags (ROS-based)
  2. Lab 2: Getting started with the Intel RealSense T265 on Raspberry Pi using librealsense and ROS
  3. Lab 3: Indoor non-GPS flight using the Intel T265 (ROS-based)
  4. Lab 4: Autonomous indoor non-GPS flight using the Intel T265 (ROS-based)
  5. Lab 5: MAVLink bridge between the Intel T265 and ArduPilot (non-ROS)
  6. Lab 6: Calibration and camera orientation for vision positioning with the Intel T265

I invite you to read this series of well-detailed experiments and instructions on how you can implement the RealSense T265 tracking camera system:

https://discuss.ardupilot.org/t/gsoc-2019-integration-of-ardupilot-...

Here is a video showing autonomous indoor flight using the system in a ROS/MAVROS environment (this is part of Lab 4):

Lab 5 shows how to fly using a Python script that sends the MAVLink VISION_POSITION_ESTIMATE message directly to the flight controller.

You can read about the underlying principles of how to incorporate a VIO tracking camera with ArduPilot using Python, without ROS. After installing the necessary packages and configuring the FCU parameters, the vehicle can integrate the tracking data and perform precise navigation in a GPS-less environment. The pose confidence level is also available for viewing directly on the GCS, to quickly analyse the performance of the tracking camera.
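To make the Lab 5 approach concrete, here is a minimal sketch of the idea, not Thien's actual script (which lives in the vision_to_mavros repository): read the T265 pose with pyrealsense2 and forward it to the flight controller as VISION_POSITION_ESTIMATE via pymavlink. The serial port, baud rate, update rate, and the simplified frame remap are assumptions; the real script also handles arbitrary camera orientations (Lab 6) and timestamp synchronisation.

```python
import time
import math
import pyrealsense2 as rs
from pymavlink import mavutil

# Connection settings are assumptions; adjust for your telemetry link.
master = mavutil.mavlink_connection('/dev/ttyUSB0', baud=921600)
master.wait_heartbeat()

pipe = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.pose)          # T265 6-DOF pose stream
pipe.start(cfg)

try:
    while True:
        frames = pipe.wait_for_frames()
        pose = frames.get_pose_frame()
        if not pose:
            continue
        d = pose.get_pose_data()

        # Simplified T265 -> NED remap for a forward-facing camera
        # (T265 frame: x right, y up, z back; NED: x forward, y right, z down).
        x, y, z = -d.translation.z, d.translation.x, -d.translation.y

        # Quaternion -> Euler angles; illustrative only, since a rigorous
        # implementation must apply the same axis remap to the rotation.
        qw, qx, qy, qz = d.rotation.w, d.rotation.x, d.rotation.y, d.rotation.z
        roll = math.atan2(2*(qw*qx + qy*qz), 1 - 2*(qx*qx + qy*qy))
        pitch = math.asin(max(-1.0, min(1.0, 2*(qw*qy - qz*qx))))
        yaw = math.atan2(2*(qw*qz + qx*qy), 1 - 2*(qy*qy + qz*qz))

        master.mav.vision_position_estimate_send(
            int(time.time() * 1e6),        # timestamp in microseconds
            x, y, z, roll, pitch, yaw)
        time.sleep(1.0 / 30)               # feed the EKF at roughly 30 Hz
finally:
    pipe.stop()
```

On the FCU side, the ArduPilot non-GPS navigation docs describe the matching parameter setup (for example, EK2_GPS_TYPE = 3 so the EKF ignores GPS); once those are set, the EKF fuses these messages and modes like Loiter and Guided work indoors.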

Thanks to Thien for this amazing project, experiments can now be carried out on the T265 with different flight controllers and stacks compatible with the VISION_POSITION_ESTIMATE MAVLink message.



Comment by Chris Anderson on June 25, 2019 at 5:56pm

Fantastic work! I did the same thing with a T265 and a Rover (it will be racing at this weekend's DIY Robocars race in Oakland), but this is next-level stuff.

Comment by Mateusz Sadowski on June 26, 2019 at 1:29am

WOW! Great job! I strongly believe that combining ROS and ArduPilot can result in really powerful applications.

Are there any plans to host those tutorials outside of the ArduPilot forums? I'll be sharing information about the labs in the next issue of Weekly Robotics; hope you don't mind the extra traffic!

Comment by Mateusz Sadowski on June 26, 2019 at 1:34am

Also @Patrick: when you copied the lab descriptions, you also copied the click counts by mistake, making it a bit confusing with the "ROS 2"-based labs (I got quite excited for a second!)

Comment by Patrick Poirier on June 26, 2019 at 2:58am

Thanks Chris!

Mateusz, no problem, you can spread the news :-)

Most of the details are in the series of blogs published on ArduPilot.org; you can get the essence directly from Thien's GitHub:

https://github.com/hoangthien94/vision_to_mavros

Comment by d j on June 26, 2019 at 7:21am

Thank you, Patrick.

From

https://www.intelrealsense.com/tracking-camera-t265/

"For a limited time, get the power of an Intel® RealSense™ Depth Camera D435 and Tracking Camera T265 bundled together for one great price, and get started with your next project today."

https://discuss.ardupilot.org/t/gsoc-2019-integration-of-ardupilot-...

https://github.com/IntelRealSense/librealsense/blob/master/doc/samp...

I am building a lidar-based, forward-looking 2D radar array for my car to provide obstacle avoidance and alerting. Surround lidar is a low-cost device today; what matters is turning it into a forward-only-looking device (a live 100 × 100 point depth array). Laser lidar is not heavy on computing power, since the number of points analyzed is reduced more than a hundredfold (a 1 Mpx video frame vs. a 100 × 100 px lidar array).

Since my car is not intended for autonomous mode, I don't need 2D or 3D SLAM support. A friend of mine develops laser-based 3D scanners, and I am testing every laser scanner and laser projector in outdoor mode, since most laser distance meters fail in direct sunlight and so are not fit for outdoor obstacle avoidance.

Let me know which should work better for me: a depth camera or laser lidar (2D radar) as an outdoor obstacle-avoidance solution?

Comment by Patrick Poirier on June 26, 2019 at 7:38am

Well, it is a little off-topic, but what you might consider is using the D435 API (or Python wrapper) to generate the 100 × 100 point live depth array. I did something similar and it works quite well.

We are also looking for the official release of the RealSense combo (T265 + D435) as a SLAM + avoidance system.
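If it helps, here is roughly what that looks like, as a minimal sketch assuming pyrealsense2 and NumPy (the stream settings and crop are illustrative, not my exact code): grab a D435 depth frame and decimate it to a 100 × 100 grid, keeping the nearest reading in each cell so thin obstacles are not averaged away.

```python
import numpy as np
import pyrealsense2 as rs

pipe = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
profile = pipe.start(cfg)
scale = profile.get_device().first_depth_sensor().get_depth_scale()

try:
    frames = pipe.wait_for_frames()
    depth = np.asanyarray(frames.get_depth_frame().get_data())  # 480 x 640, uint16

    # Crop to 400 x 600 so the frame divides evenly into a 100 x 100 grid.
    cells = depth[40:440, 20:620].astype(np.float32)
    cells[cells == 0] = np.nan              # 0 means "no depth" on the D435
    # Minimum depth per 4 x 6 block: keeps wires and nails visible.
    grid = np.nanmin(cells.reshape(100, 4, 100, 6), axis=(1, 3)) * scale
    print(grid.shape, grid[50, 50], "m at image centre")
finally:
    pipe.stop()
```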

Comment by d j on June 26, 2019 at 1:55pm

Thank you.

"We are also looking for the official release of the RealSense combo (T265 + D435) as a SLAM + avoidance system."

Then you are in luck; read the following, from https://www.intelrealsense.com/tracking-camera-t265/ , as the official release:

"For a limited time, get the power of an Intel® RealSense™ Depth Camera D435 and Tracking Camera T265 bundled together for one great price, and get started with your next project today."

"What you might consider is using the D435 API (or Python wrapper) to generate the 100 × 100 point live depth array. I did something similar and it works quite well."

How can I contact you about the details and the success story?

Comment by d j on June 26, 2019 at 3:17pm

A short question: since the term "indoor non-GPS flight" is repeated three times, are you suggesting that indoor GPS flight is a valid alternative?

To my understanding, indoor GPS is not something that should be considered a reliable technology for controlling a drone's flight indoors, so promoting the Intel T265 as a viable alternative to a highly unreliable technology may hurt Intel's marketing strategy. "Indoor GPS" doesn't sound right from an R&D point of view.

Comment by Patrick Poirier on June 26, 2019 at 3:58pm

Hello,

As the title says, it is a GPS-denied environment.

To keep it clear and simple, we generally say "non-GPS" so everyone understands that the autonomous modes (like Loiter, Guided, and Auto) are being driven by a different localisation system.

If you want something more academic: "Implementation of Visual Inertial Stereoscopic Odometry as a 6 Degree Of Freedom Pose Estimator"... ;-)

Please note that the labs are performed indoors, as Thien cannot fly drones outdoors on campus. I am presently experimenting outdoors, and proceeding with caution, as the system can degrade rapidly when exposed to bright sunlight.

If you read through the blogs, there are still a couple of roadblocks to making the T265 a reliable system that can be used accurately in various configurations, such as looking down from higher altitudes. There are many references to open PRs on the RealSense GitHub, and we are progressing as fast as the issues are addressed by Intel.

A final note on the T265-D435 bundle: as you can read, I do have them on hand, but there is NO official release of the SLAM + avoidance SOFTWARE system. We are waiting for an official, documented release, hopefully with real 3D mapping.

As you can see here, it is still on a development branch and in 2D format: https://github.com/IntelRealSense/realsense-ros/tree/occupancy-mapp...

You can send me a personal message, but please note that most of my free time is dedicated to mentoring Thien on this interesting project until September.

Regards

Comment by d j on June 27, 2019 at 5:36am

"As the title says, it is GPS denied environment.

but what follows repeated 3 times is

"Indoor non-GPS flight

indoor flight is always non-GPS so

"Indoor non-GPS flight   - stays for tautology

My nice suggestion to improve wording

Triangulation (2D SLAM/ 3D SLAM) based indoor navigation flight

as good and as reliable as in GPS environment

"GPS denied environment.

Indoor environment stays exactly for "GPS denied environment.

BTW

" you might consider is using the D435 API (or Python Wrapper) to generate the 100 points x 100 points depth live array. I did something similar and it works quite good.

Should I process mega pixels subject to blur filter not to loose small sharp objects (like wires, nails) in obstacle avoidance algorithm ?
