Eugene Pomazov's Posts (3)


Stereoscopic systems are widely used in drone navigation, but this project takes a new approach - a variable baseline.

In the related paper (PDF), the team showcases three different applications of this system for quadrotor navigation:

  • flying through a forest
  • flying through a static or dynamic gap of unknown shape and location
  • accurate 3D pose detection of an independently moving object

They show that their variable baseline system is accurate and robust in all three scenarios.
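The intuition behind a variable baseline can be sketched numerically: depth from disparity is Z = f * B / d, so depth error grows with the square of the range and shrinks with the baseline. The focal length and baseline values below are illustrative assumptions, not numbers from the paper:

```python
# Illustrative numbers only: why a wider stereo baseline helps at range.
# Depth from disparity: Z = f * B / d, so one pixel of disparity noise
# maps to a depth error of roughly dZ = Z**2 / (f * B).
f_px = 700.0  # assumed focal length in pixels (not from the paper)

def depth_error(z_m, baseline_m, disp_noise_px=1.0):
    """Approximate depth uncertainty at range z_m for a given baseline."""
    return (z_m ** 2) * disp_noise_px / (f_px * baseline_m)

for b in (0.06, 0.20):  # narrow vs. wide baseline, metres
    print(f"baseline {b:.2f} m -> depth error at 10 m: {depth_error(10.0, b):.2f} m")
```

With these assumed numbers, widening the baseline from 6 cm to 20 cm cuts the depth error at 10 m from about 2.4 m to about 0.7 m, which is why adapting the baseline to the scene pays off.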

For the video capture, the Raspberry Pi-based StereoPi board was used. Additional AI-acceleration hardware (Intel Movidius) is being considered as a next step, as well as a more powerful CM4-based version of the StereoPi (v2).

Here is a brief video of the project:



StereoPi: preparing for batch production

Since my last blog post on June 29, we have a lot of news.


1. We have chosen a factory for batch production.

2. The first 20 factory prototypes were assembled and passed all tests!


3. We also conducted some additional experiments:

  • Livestreaming 3D video over an external video link


  •  ROS depth map building


A TL;DR for the ROS implementation is here

  • YouTube stereoscopic livestream over an LTE dongle


  • Livestream to Oculus Go (third-person view, like in computer games)


Oculus experiment details are here

  • Front view/rear view livestream from our crawler 


  • 360-degree panoramic photo


TL;DR: a step-by-step description of the experiment is here

4. We plan to start a crowdfunding campaign in the coming weeks.

You can subscribe to our crowdfunding updates and campaign start reminder.

If you have any hardware or software questions, I'm ready to answer them. 

UPD: Thanks to Tomas for the reminder in his comment; I forgot to include a link to the project's site:



I’d like to share with you the details of our latest project, built on the Compute Module 3 for playing with stereo video and OpenCV. It could be interesting for those who study computer vision or build drones and robots (3D FPV).

It works with stock Raspbian; you only need to put a dt-blob.bin file on the boot partition to enable the second camera. This means you can use raspivid, raspistill, and other traditional tools to work with pictures and video.

JFYI, stereo mode has been supported in Raspbian since 2014; you can read the implementation story on the Raspberry Pi forum.

Before diving into the technical details let me show you some real work examples.

1. Capture image:

raspistill -3d sbs -w 1280 -h 480 -o 1.jpg

and you get this:

You can download original file here.

2. Capture video:

raspivid -3d sbs -w 1280 -h 480 -o 1.h264

and you get this:


You can download original captured video fragment (converted to mp4) here.

3. Using Python and OpenCV, you can experiment with a depth map:


For this example I used slightly modified code from my previous project, 3Dberry.

I used this pair of cameras for taking the pictures in the examples above:

For video livestreams from a drone I use wide-angle (160-degree) cameras like this:

Now, on to the hardware part.

Front view:

Top view:

Dimensions: 90x40 mm
Camera: 2 x CSI (15-pin cable)
GPIO: classic 40-pin Raspberry Pi GPIO
USB: 2 x USB Type-A, 1 x USB on pin headers
Ethernet: RJ45
Storage: Micro SD (for CM3 Lite)
Monitor: HDMI out
Power: 5V DC
Supported Raspberry Pi: Raspberry Pi Compute Module 3, Raspberry Pi CM 3 Lite, Raspberry Pi CM 1
Supported cameras: Raspberry Pi camera OV5647, Raspberry Pi camera Sony IMX219, HDMI in (single mode)
Firmware update: MicroUSB connector
Power switch: Yes! No more connecting and disconnecting the MicroUSB cable to power-cycle the board!
Status: we have fully tested, ready-for-production samples

That’s all that I wanted to cover today. If you have any questions I will be glad to answer.

Project website is
