Hello!  I am a developer and interested in Obstacle avoidance.  Could someone direct me to a group or individual working in this field?  Is anyone working with LIDAR, optical sensors, and the like for obstacle avoidance and path planning?  I’d like to help.


Replies

  • Thanks for the info :)

    Todd Hill said:

    @scott, check out Ardupilot.org, as that is where you will find the lead developers of the project roaming.  Maybe paste your post here, and here.  Those are perhaps the two biggest developer discussion forums you will find online for open source UAV/UAS.  I think both communities could greatly use, and benefit from, your skills/interest. :-) Cheers!

  • Developer

    I suggest joining the conversation at https://gitter.im/ArduPilot/ardupilot/VisionProjects and https://gitter.im/ArduPilot/Research

    There are a number of people and a number of companies active in there.

  • Moderator

    I am currently using a DJI Matrice 100 and the Guidance system from DJI, along with their Manifold computer (an Nvidia TK1).  It was provided to me by DJI as part of the DJI Developer Challenge, and I have to say it works very well.  The Guidance sensor detects obstacles using a combination of sonar sensors and stereo cameras.  All of the programming to navigate a path around obstacles is done with ROS; the Guidance sensors are exposed as ROS topics.  It is easy to create a point cloud with the stereo cameras.  I used Python scripts and some open source C++ code to navigate the path.
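The per-frame logic of that pipeline is simple enough to sketch. This is a hedged illustration, not DJI's actual API: the topic name in the comment is a guess, and only the distance math is shown as self-contained Python of the kind one would run inside a ROS point-cloud callback.

```python
import math

def nearest_obstacle(points):
    """Distance (m) to the closest return in a point cloud.

    `points` is an iterable of (x, y, z) tuples in the sensor frame,
    e.g. unpacked from a sensor_msgs/PointCloud2 message inside a
    ROS subscriber callback.
    """
    return min(math.sqrt(x * x + y * y + z * z) for x, y, z in points)

# In a real node this function would run once per frame, fed by
# something like (topic name is an assumption, not the real SDK name):
#   rospy.Subscriber("/guidance/points", PointCloud2, callback)
cloud = [(1.0, 0.0, 0.0), (0.0, 2.0, 0.0), (0.3, 0.4, 0.0)]
print(nearest_obstacle(cloud))  # closest return is roughly 0.5 m away
```

A reactive avoider would compare this distance against a safety threshold each frame and command a stop or sidestep when it is crossed.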

  • Todd,

    Video camera vs. lidar for object sensing and avoidance:

    Lidar is generally a single-point scanning device.

    In theory, via optics, a larger area can be illuminated at once to emulate radar, but you need a much stronger flash, and the first return received comes from the closest obstacle within the scanned area.

    So to cover a full field of view you need to integrate the lidar with a gimbal.

    A single video camera covers its field of view all at once and can generate depth maps on the fly (e.g. from motion between frames).

    No need for stereo vision (stereoscopy).
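For what it's worth, the classic way to turn two views (a stereo pair, or two frames from one moving camera) into a depth map is block-matching disparity; depth then follows as focal length × baseline / disparity. A toy single-scanline sketch in plain Python/NumPy, purely illustrative (real systems would use something like OpenCV's StereoBM):

```python
import numpy as np

def disparity_row(left_row, right_row, max_disp=16, block=5):
    """Block-matching disparity along one rectified scanline.

    For each pixel in the left row, slide a small window over the
    right row and keep the shift d with the lowest sum of absolute
    differences (SAD).  Depth is then focal * baseline / d.
    """
    n = len(left_row)
    half = block // 2
    disp = np.zeros(n, dtype=int)
    for x in range(half + max_disp, n - half):
        patch = left_row[x - half:x + half + 1]
        costs = [np.abs(patch - right_row[x - d - half:x - d + half + 1]).sum()
                 for d in range(max_disp)]
        disp[x] = int(np.argmin(costs))
    return disp

# Synthetic pair: in the right view every feature sits 4 px further
# left, so the true disparity is 4 everywhere.
rng = np.random.default_rng(0)
left = rng.random(200)
right = np.roll(left, -4)
disp = disparity_row(left, right)
```

The recovered disparity is constant (4 px) across the valid part of the row; a closer object would shift more pixels and therefore score a larger disparity.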

  • @scott, check out Ardupilot.org, as that is where you will find the lead developers of the project roaming.  Maybe paste your post here, and here.  Those are perhaps the two biggest developer discussion forums you will find online for open source UAV/UAS.  I think both communities could greatly use, and benefit from, your skills/interest. :-) Cheers!

    ArduPilot Open Source Autopilot
    The most advanced open source autopilot for use by both professionals and hobbyists. Supports multi-copters, planes, rovers, boats, helicopters, ante…
  • Hi Ben,

    I never did get very far with this.  Was looking for a group where I could contribute a bit and learn, didn't find one, then work got pretty busy.

    I had a look at the DJI Guidance SDK website and feel that the documentation is lacking.  I'm not saying it isn't a good platform or capable software, but documentation is important.  I saw nothing indicating an easy-to-use programming interface suitable for a hobbyist, or for a professional who does not spend a lot of time programming.  I'm not sure what your timeline is, but it seems to me the DJI Guidance SDK route could take a long time before you actually get to take some pictures.

    Regards,

       Scott

    Ben Nyberg said:

    Hi Scott!

    I'm not a software developer but I'm very interested in working on obstacle avoidance.

    The project I'm working on is surveying for rare plants on vertical cliffs in Hawaii.

    Ideally, I would be able to set a distance to the surface, which the UAV could then hold as I flew vertically taking photos for photogrammetry software.

    I've looked into the DJI Guidance SDK with the intention of purchasing the DJI Mavic Pro, but as I mentioned before it's a little over my head.

    Any input/help you might have would be greatly appreciated!

  • Since the targeted surface is soft (growing plants), LIDAR can fool you (returns can go missing).

    Twin-camera or single-camera stereoscopic vision, generating depth maps on the fly to detect the closest objects, is the most advanced technology available today in the DIY drones world.

    You need to scan a sphere of view with either a LIDAR or a camera.  A camera covers its field of view by default; a LIDAR requires a moving head to get the same coverage.

    The generated 3D point cloud can be processed on the fly (open source libraries are available) to extract volumetric objects via pattern recognition and 3D object libraries, producing obstacle-free spaces for smart flying.

    You need to install a companion computer (Intel or others) to process video frames on the fly.

    You can test your obstacle-avoidance vision system on a desktop computer, laptop, or tablet at home.

    LIDAR may not be suitable for obstacle avoidance when the obstacles are soft objects (plants).
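One minimal interpretation of "extract obstacle-free spaces" from a point cloud is a 2-D occupancy grid: project the points onto the horizontal plane, mark the cells that contain returns, and let a planner search the free cells. A sketch, where the cell size and extent are arbitrary assumptions rather than values from any particular library:

```python
import numpy as np

def occupancy_grid(points, cell=0.5, extent=10.0):
    """Project 3-D points onto the x-y plane into a boolean grid.

    points: (N, 3) array in metres, sensor at the origin.  True marks
    an occupied cell; a planner searches the remaining free cells.
    """
    n = int(2 * extent / cell)
    grid = np.zeros((n, n), dtype=bool)
    xy = points[:, :2]
    keep = np.all(np.abs(xy) < extent, axis=1)  # drop far returns
    idx = ((xy[keep] + extent) / cell).astype(int)
    grid[idx[:, 0], idx[:, 1]] = True
    return grid

pts = np.array([[1.0, 0.0, 0.2],   # two nearby returns land in
                [1.2, 0.1, 0.5],   # the same 0.5 m cell
                [-3.0, 4.0, 1.0]])
grid = occupancy_grid(pts)
```

A real pipeline would also filter noise and inflate occupied cells by the vehicle's radius before planning through the free space.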

  • Hi Scott!

    I'm not a software developer but I'm very interested in working on obstacle avoidance.

    The project I'm working on is surveying for rare plants on vertical cliffs in Hawaii.

    Ideally, I would be able to set a distance to the surface, which the UAV could then hold as I flew vertically taking photos for photogrammetry software.

    I've looked into the DJI Guidance SDK with the intention of purchasing the DJI Mavic Pro, but as I mentioned before it's a little over my head.

    Any input/help you might have would be greatly appreciated!
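The distance-hold behaviour described above reduces, in its simplest form, to a proportional controller on a forward-facing rangefinder: command velocity toward or away from the cliff in proportion to the range error. A toy sketch in which the gain, velocity limit, and sensor interface are all assumptions, not any autopilot's actual API:

```python
def standoff_velocity(measured_m, setpoint_m, kp=0.8, vmax=1.0):
    """Forward velocity command (m/s) to hold a set distance.

    Positive output flies toward the surface (range too large),
    negative backs away.  Clamping keeps a single bad range reading
    from commanding an aggressive move.
    """
    v = kp * (measured_m - setpoint_m)
    return max(-vmax, min(vmax, v))

# Toy simulation: start 8 m from the cliff face, hold 3 m, 10 Hz loop.
d, dt = 8.0, 0.1
for _ in range(200):
    d -= standoff_velocity(d, 3.0) * dt  # flying forward shrinks the range
print(round(d, 2))  # settles at 3.0
```

On a real flight controller the output would feed a body-frame velocity setpoint while the pilot flies the vertical axis, and a PID (not bare P) term would damp the approach.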

  • Sure thing. Tell us more about what you want to do. We're designing a whole range of long-range sense-and-avoid LiDARs for drones.

    • Well… I am a mechanical engineer and longtime C++ developer.  At my job I write code to interact with networked A/D devices to gather vibration signals.  I spend a lot of time writing fault-tolerant network code, but once I do receive data I provide the signal processing for my company, so I have a pretty sound proficiency in digital signal processing.  Back in my college days as a research assistant I built a digital holography system and based my master’s thesis, entitled “Digital Holography and Noise Tolerant Phase Unwrapping”, on it.  The point being that writing code to interact with hardware and process real-world data is in my wheelhouse.  Like many people, I have been drawn to the emerging drone revolution and want to participate in its development.  I feel that autonomy and obstacle avoidance is one place where there is room to make a meaningful contribution.  I’m not sure where you’re at in your development, but I could see myself working with just the sensing device, such as a scanning LiDAR, to develop code for perhaps building a world model for path planning, or for reactive avoidance.

      If you have a code base started, I’d consider diving into a section needing refinement; or maybe you’re at an earlier stage, considering various techniques to base the code on, in which case we might talk about a published paper that the group is interested in following.  But to answer your question about what I want to do: I want to use the skills I have to help further development in UAV autonomy and obstacle avoidance.

