Hello!  I am a developer interested in obstacle avoidance.  Could someone direct me to a group or individual working in this field?  Is anyone working with LIDAR, optical sensors, and the like for obstacle avoidance and path planning?  I’d like to help.


Replies

  • Thanks for the info :)

    Todd Hill said:

    @scott, check out Ardupilot.org, as that is where you will find the lead developers of the project roaming.  Maybe paste your post here, and here.  Those are perhaps the two biggest developer discussion forums you will find online for open-source UAV/UAS.  I think both communities could greatly use and benefit from your skills/interest. :-)  Cheers!

  • Developer

    I suggest joining the conversation at https://gitter.im/ArduPilot/ardupilot/VisionProjects and https://gitter.im/ArduPilot/Research

    There are a number of people and a number of companies active in there.

  • Moderator

    I am currently using a DJI Matrice 100 and the Guidance system from DJI, along with their Manifold computer (Nvidia TK1).  It was provided to me by DJI as part of the DJI Developer Challenge, and I have to say it works very well.  The Guidance sensor detects obstacles using a combination of sonar sensors and stereo cameras.  To make it navigate a path around obstacles, all of the programming is done with ROS; the Guidance sensors operate as ROS topics.  It is easy to create a point cloud with the stereo cameras.  I used Python scripts and some open-source C++ code to navigate the path.
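The stereo relation those Guidance point clouds rest on can be sketched in a few lines. This is a generic pinhole/stereo illustration, not DJI's actual API, and the focal length, baseline, and principal point below are made-up example values:

```python
# Hedged sketch: converting a stereo disparity into metric depth, then
# back-projecting one pixel into a 3-D camera-frame point.  All calibration
# numbers here are illustrative assumptions, not any real sensor's values.

def disparity_to_depth(disparity_px, focal_px=460.0, baseline_m=0.15):
    """Depth (metres) from disparity (pixels): z = f * B / d."""
    if disparity_px <= 0:
        return float("inf")  # no stereo match: treat as open space
    return focal_px * baseline_m / disparity_px

def point_from_pixel(u, v, disparity_px, cx=320.0, cy=240.0,
                     focal_px=460.0, baseline_m=0.15):
    """Back-project one pixel (u, v) into a 3-D point (x, y, z)."""
    z = disparity_to_depth(disparity_px, focal_px, baseline_m)
    x = (u - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return (x, y, z)
```

Running this over every matched pixel in a stereo pair is what produces the point cloud the Guidance topics publish; in a real pipeline OpenCV or the vendor SDK does this step for you.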

  • Todd,

    Video camera vs. lidar for object sensing and object avoidance:

    Lidar is generally a single-point scanning device.

    In theory, via optics, a larger area can be scanned at once to emulate radar, but you need a much stronger flash source,

    and the first signal received comes from the closest obstacle within the scanned area (sphere).

    So to scan a sphere (FOV) you need to integrate the lidar with a gimbal.

    A single video camera covers such a field of view at once, generating depth maps on the fly.

    No need for stereo vision (stereoscopy).
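The gimbal-scanning step described above can be sketched as geometry: each single-point range reading, taken at a pan/tilt pose, has to be turned into a Cartesian point before it matches what a depth camera delivers per frame. This is a generic illustration, not tied to any particular lidar or gimbal:

```python
import math

# Sketch (assumed conventions: x forward, y left, z up) of turning
# single-point lidar returns taken at gimbal angles into 3-D points,
# plus the "first return wins" closest-obstacle rule mentioned above.

def gimbal_ray_to_point(pan_deg, tilt_deg, range_m):
    """Convert one lidar return at a gimbal pose into a Cartesian point."""
    pan = math.radians(pan_deg)
    tilt = math.radians(tilt_deg)
    x = range_m * math.cos(tilt) * math.cos(pan)   # forward
    y = range_m * math.cos(tilt) * math.sin(pan)   # left
    z = range_m * math.sin(tilt)                   # up
    return (x, y, z)

def closest_return(scan):
    """Nearest obstacle inside the scanned FOV; scan = [(pan, tilt, range_m)]."""
    return min(scan, key=lambda s: s[2])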

    @scott, check out Ardupilot.org, as that is where you will find the lead developers of the project roaming.  Maybe paste your post here, and here.  Those are perhaps the two biggest developer discussion forums you will find online for open-source UAV/UAS.  I think both communities could greatly use and benefit from your skills/interest. :-)  Cheers!

  • Hi Ben,

    I never did get very far with this.  Was looking for a group where I could contribute a bit and learn, didn't find one, then work got pretty busy.

    I had a look at the DJI Guidance SDK website and feel that the documentation is lacking.  I'm not saying it isn't a good platform or capable software, but documentation is important.  I saw nothing indicating an easy-to-use programming interface suitable for a hobbyist, or for a professional who does not spend a lot of time programming.  I'm not sure what your timeline is, but it seems to me the DJI Guidance SDK route could take a long time before you actually get to take some pictures.

    Regards,

       Scott

    Ben Nyberg said:

    Hi Scott!

    I'm not a software developer but I'm very interested in working on Obstacle avoidance.

    The project I'm working on is surveying for rare plants on vertical cliffs in Hawaii. 

    Ideally, I would be able to set a distance to the surface, which the UAV could then hold as I flew vertically taking photos for photogrammetry software.   

    I've looked into the DJI Guidance SDK with the intention of purchasing the DJI Mavic Pro, but as I mentioned before it's a little over my head.

    Any input/help you might have would be greatly appreciated!

    Since the targeted surface is soft (overgrown with plants), LIDAR can fool you (missing data).

    Twin-camera or single-camera stereoscopic vision, generating depth maps on the fly to detect the closest objects, is the most advanced technology in the DIY drone world today.

    You need to scan a sphere with a LIDAR or camera.  A camera covers its field of view at once; a LIDAR requires a moving head to get the same coverage (FOV).

    The generated 3D point cloud can be processed on the fly (open libraries are available) to extract volumetric objects, using pattern recognition and 3D object libraries, and to generate obstacle-free spaces for smart flying.

    You need to install a companion computer (Intel or others) to process video frames on the fly.

    You can test your obstacle-avoidance video system at home on a desktop computer, laptop, or tablet.

    LIDAR may not be suitable for obstacle avoidance in the case of soft objects (plants).
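The per-frame depth-map check described above reduces to a simple scan for the nearest valid return. A minimal sketch, assuming the depth map arrives as a nested list of metres and that the validity range and safety margin are illustrative choices, not values from any real system:

```python
# Hedged sketch of the on-the-fly depth-map check a companion computer
# would run per frame.  A real pipeline (OpenCV, PCL, etc.) produces the
# depth map; here it is just a nested list of metres, 0.0 meaning no return.

def nearest_obstacle(depth_map, max_valid_m=20.0):
    """Smallest valid depth in the frame, or None if all open space."""
    depths = [d for row in depth_map for d in row if 0.0 < d < max_valid_m]
    return min(depths) if depths else None

def is_path_clear(depth_map, safety_margin_m=2.0):
    """True when no valid return is closer than the safety margin."""
    d = nearest_obstacle(depth_map)
    return d is None or d >= safety_margin_m
```

This is also easy to exercise on a desktop with recorded frames, which is exactly the at-home testing suggested above.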

  • Hi Scott!

    I'm not a software developer but I'm very interested in working on Obstacle avoidance.

    The project I'm working on is surveying for rare plants on vertical cliffs in Hawaii. 

    Ideally, I would be able to set a distance to the surface, which the UAV could then hold as I flew vertically taking photos for photogrammetry software.   

    I've looked into the DJI Guidance SDK with the intention of purchasing the DJI Mavic Pro, but as I mentioned before it's a little over my head.

    Any input/help you might have would be greatly appreciated!
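The "hold a set distance to the cliff face" behaviour Ben describes is, at its simplest, a proportional controller on a forward range reading. A rough sketch under stated assumptions: the gain, speed limit, and range source (stereo or lidar) are all illustrative, and this is not DJI's SDK:

```python
# Hedged sketch: proportional standoff-distance hold for a vertical cliff
# survey.  Positive output moves toward the surface, negative moves away.
# Gain and speed clamp are made-up safe-sounding numbers, not flight-tested.

def standoff_velocity(range_m, target_m=3.0, gain=0.5, v_max=1.0):
    """Forward velocity command (m/s) to hold `target_m` from the surface."""
    error = range_m - target_m          # positive when too far from the cliff
    v = gain * error
    return max(-v_max, min(v_max, v))   # clamp to a safe speed
```

A real implementation would feed this command into the flight controller's velocity setpoint interface each cycle and add filtering for the noisy, vegetation-covered surface mentioned elsewhere in this thread.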

  • Sure thing. Tell us more about what you want to do. We're designing a whole range of long-range sense-and-avoid LiDARs for drones.

    • Well… I am a mechanical engineer and longtime C++ developer.  At my job I write code to interact with networked A/D devices to gather vibration signals.  I spend a lot of time writing fault-tolerant network code, but once I do receive data I provide the signal processing for my company, so I have a pretty sound proficiency in digital signal processing.  Back in my college days as a research assistant I built a digital holography system and based my master’s thesis, entitled “Digital Holography and Noise Tolerant Phase Unwrapping”, on it.  The point being that writing code to interact with hardware and process real-world data is in my wheelhouse.  Like many people, I have been drawn to the emerging drone revolution and want to participate in its development.  I feel that autonomy and obstacle avoidance is one place where there is room to make a meaningful contribution.  I’m not sure where you’re at in your development, but I could see myself working with just the sensing device, such as a scanning LiDAR, to develop code for perhaps building a world model for path planning, or for reactive avoidance.

      If you have a code base started, I’d consider diving into a section needing refinement; or maybe you’re at an earlier stage and considering various techniques to base the code upon, in which case we might talk about a published paper that the group is interested in following.  But to answer your question about what I want to do: I want to use the skills I have to help further development in UAV autonomy and obstacle avoidance.
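The "world model for path planning" idea above often starts as a simple occupancy grid fed by range returns. A minimal sketch, with cell size, grid extent, and the binary (rather than log-odds) update all being illustrative assumptions:

```python
# Hedged sketch of a tiny 2-D occupancy grid as a starting world model.
# Real systems use probabilistic (log-odds) updates and a mapping library;
# this just marks cells where a sensor return landed.

class OccupancyGrid:
    def __init__(self, size=50, cell_m=0.5):
        self.size, self.cell_m = size, cell_m
        self.occupied = set()  # (ix, iy) cells with a detected obstacle

    def _cell(self, x_m, y_m):
        """Map metric coordinates (vehicle at grid centre) to a cell index."""
        ix = int(x_m / self.cell_m) + self.size // 2
        iy = int(y_m / self.cell_m) + self.size // 2
        return ix, iy

    def mark_obstacle(self, x_m, y_m):
        ix, iy = self._cell(x_m, y_m)
        if 0 <= ix < self.size and 0 <= iy < self.size:
            self.occupied.add((ix, iy))

    def is_free(self, x_m, y_m):
        return self._cell(x_m, y_m) not in self.occupied
```

A planner (A*, RRT, or a simple reactive rule) would then query `is_free` along candidate paths; that query layer is where reactive avoidance and deliberative planning share the same model.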

