Hey everyone,

I want to share a nice project I recently found on YouTube. It's an IR 3D scanner robot made by a group of students led by Nicolas Douard. It turns out the project has an excellent report containing a step-by-step building guide, bill of materials, drawings, software, and demos.

 

Here is a short overview:

The team's goal was to perform 3D reconstruction using mobile embedded systems. The solution should be capable of generating a 3D model of indoor spaces.

The vehicle uses a standard Actobotics platform. This base offers great traction and climbing ability (37° approach angle) and can easily handle heavy loads. The vehicle's electronic components are mounted underneath the plate and include the radios, a Raspberry Pi + Navio2, and the DC motors and their driver (drawings and 3D models are available in the project's report).

 

A Raspberry Pi board is paired with a Navio2 shield running ArduPilot for navigation control. The Navio2 board is connected to a Spektrum DSMX receiver and a 433 MHz telemetry radio, and it generates a PWM signal for each motor. This signal goes to the motor controller via servo wires.
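As a rough illustration of the RC-style servo signal involved (the function name and the 1000–2000 µs endpoints are conventional assumptions, not values taken from the report), the control side essentially maps a normalized throttle command to a pulse width:

```python
def throttle_to_pulse_us(throttle, min_us=1000, neutral_us=1500, max_us=2000):
    """Map a normalized throttle in [-1, 1] to an RC servo pulse width in µs.

    1500 µs is the conventional neutral point; 1000/2000 µs are full
    reverse/forward. These endpoints are illustrative assumptions.
    """
    throttle = max(-1.0, min(1.0, throttle))  # clamp out-of-range input
    if throttle >= 0:
        return neutral_us + throttle * (max_us - neutral_us)
    return neutral_us + throttle * (neutral_us - min_us)

print(throttle_to_pulse_us(0.0))   # 1500.0 (stop)
print(throttle_to_pulse_us(1.0))   # 2000.0 (full forward)
print(throttle_to_pulse_us(-0.5))  # 1250.0 (half reverse)
```

On the real vehicle this mapping is handled by ArduPilot; the sketch only shows the shape of the signal that travels over the servo wires to the motor controller.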

 


 

The space above is left available for the IR 3D scan system (a Kinect 360). 3D reconstruction is resource-intensive and requires a very powerful GPU/CPU combination, which makes it impossible to run on cheap embedded hardware like a Raspberry Pi board. The data is therefore logged on the vehicle and later transferred to a remote computer that performs the resource-intensive 3D reconstruction (in this project, an Nvidia GTX 1080 GPU (8 GB) paired with an Intel Core i7-6800K CPU (6 cores, 3.40 GHz)).





 

The IR 3D scan module's primary goal is to log the RGB and depth data used for 3D reconstruction. RGB-D frame capture is handled by Logger1, modified to run over SSH.

ElasticFusion was used for processing. It is a real-time dense visual SLAM (Simultaneous Localization and Mapping) system capable of capturing comprehensive, dense, globally consistent surfel-based maps of room-scale environments explored with an RGB-D camera. The capture and reconstruction processes can be separated thanks to dedicated loggers, which collect raw data and pack it into a KLG file. This file can later be processed by ElasticFusion to generate a 3D model.
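To make the capture/reconstruction split concrete, here is a minimal sketch of packing and reading frames in the .klg layout. The layout (int32 frame count, then per frame an int64 timestamp, int32 depth payload size, int32 RGB payload size, followed by the two payloads) is an assumption based on the ElasticFusion loggers' published format, not taken from the project report, and the payload bytes below are placeholders rather than real sensor data:

```python
import struct

def write_klg(path, frames):
    """Write (timestamp, depth_bytes, rgb_bytes) frames in .klg-style layout."""
    with open(path, "wb") as f:
        f.write(struct.pack("<i", len(frames)))  # int32 frame count
        for timestamp, depth, rgb in frames:
            # int64 timestamp, int32 depth size, int32 RGB size, then payloads
            f.write(struct.pack("<qii", timestamp, len(depth), len(rgb)))
            f.write(depth)
            f.write(rgb)

def read_klg(path):
    """Read frames back as (timestamp, depth_bytes, rgb_bytes) tuples."""
    frames = []
    with open(path, "rb") as f:
        (count,) = struct.unpack("<i", f.read(4))
        for _ in range(count):
            timestamp, dsize, rsize = struct.unpack("<qii", f.read(16))
            frames.append((timestamp, f.read(dsize), f.read(rsize)))
    return frames

# Round-trip one tiny fake frame (placeholder payloads, not real Kinect data).
write_klg("demo.klg", [(123456789, b"\x00" * 8, b"\xff" * 12)])
print(read_klg("demo.klg")[0][0])  # 123456789
```

In the actual pipeline the logger produces this file on the vehicle, and ElasticFusion consumes it on the desktop machine.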

ElasticFusion exports MeshLab-compatible data. MeshLab cleans and optimizes meshes and can export reconstructed 3D models to more common formats such as STL or 3DS. An optimized model can then be imported into 3D modeling software such as 3ds Max and manually modified, for example to make it suitable for 3D printing.

 


Electric kart reconstructed point cloud opened in MeshLab




ElasticFusion running with GUI




Extensive documentation of the project and the embedded control system software can be found in the ScanBot 3D repository:

 

https://github.com/QBitor/ScanBot_ECS






Comments

  • great project, i will implement it.
