[Image: stereopi_2.jpg]

Hello!
I’d like to share with you the details of our latest project on the Compute Module 3 for playing with stereo video and OpenCV. It could be interesting for those who study computer vision or build drones and robots (3D FPV).

It works with stock Raspbian; you only need to put a dt-blob.bin file on the boot partition to enable the second camera. This means you can use raspivid, raspistill, and the other traditional tools for working with pictures and video.

JFYI, stereo mode has been supported in Raspbian since 2014; you can read the implementation story on the Raspberry Pi forum.

Before diving into the technical details, let me show you some real-world examples.

1. Capture image:

raspistill -3d sbs -w 1280 -h 480 -o 1.jpg

and you get this:

[Image: photo_1280_480.jpg]
You can download the original file here.
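Since OpenCV represents images as NumPy arrays, splitting such a side-by-side frame into left and right views is a single slice. A minimal sketch (the synthetic frame below just stands in for the raspistill output, so the snippet is self-contained):

```python
import numpy as np

# In practice you would load the raspistill side-by-side capture:
#   import cv2
#   pair = cv2.imread("1.jpg")
# Here a blank 1280x480 frame stands in for it.
pair = np.zeros((480, 1280, 3), dtype=np.uint8)

h, w = pair.shape[:2]
left = pair[:, : w // 2]     # left camera view (640x480)
right = pair[:, w // 2:]     # right camera view (640x480)
```

The same slicing works frame-by-frame on video read with `cv2.VideoCapture`.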

2. Capture video:

raspivid -3d sbs -w 1280 -h 480 -o 1.h264

and you get this:

[Image: 2buo65.gif]


You can download the original captured video fragment (converted to MP4) here.

3. Using Python and OpenCV, you can experiment with depth maps:

[Image: 2018-06-01-140326_1824x984_scrot.png]

For this example I used slightly modified code from my previous project, 3Dberry (https://github.com/realizator/3dberry-turorial).

I used this pair of cameras to take the pictures in the examples above:
[Image: waveshare_pair.jpg]

For a video livestream from a drone, I use wide-angle (160-degree) cameras like these:
[Image: waveshare_wide_pair.jpg]

Now, on to the hardware part.

Front view:
[Image: stereopi_front_noted_1280.jpg]

Top view:
[Image: stereopi_top_noted_2_1280.jpg]

Dimensions: 90 x 40 mm
Cameras: 2 x CSI (15-pin flex cable)
GPIO: classic 40-pin Raspberry Pi GPIO header
USB: 2 x USB Type-A, 1 x USB on pin headers
Ethernet: RJ45
Storage: microSD (for CM3 Lite)
Monitor: HDMI out
Power: 5 V DC
Supported Raspberry Pi modules: Compute Module 3, CM3 Lite, CM1
Supported cameras: Raspberry Pi camera OV5647, Raspberry Pi camera Sony IMX219, HDMI in (single mode)
Firmware update: via Micro-USB connector
Power switch: yes! No more plugging and unplugging the Micro-USB cable to power-cycle the board!
Status: we have fully tested, production-ready samples

That’s all I wanted to cover today. If you have any questions, I will be glad to answer them.

The project website is http://stereopi.com


Comments

  • For HDMI input we use the Auvidea HDMI-to-CSI-2 adapter. Unfortunately, the Toshiba chip in this adapter has only unofficial support, which is why it is impossible to use two HDMI capture modules at once. We use one HDMI input and one camera to make two independent video streams.

  • Nice project! You write that you support an HDMI input camera. How do you connect the camera to the board?

  • Of course, yes: a moving camera is our main usage scenario. The only special case is low-light conditions, where you need to set FPS priority while capturing video.

    At the moment, we have finished the hardware part of the project and are now focused on the software part. The next step is ROS and obstacle avoidance. We prefer to run practical stress tests to find all the unstable parts in our solutions.

    Auturgy, what is your project on the CM3? Did you try to make your own PCB, or did you use the Pi devboard?

  • Have you tested with a moving camera? The other issue is being able to accurately timestamp the images: there is a difference between when image capture is commanded and when it is actually executed, and it is not consistent (between the cameras, or between frames for a particular camera).
    For applications such as VIO this creates a lot of noise in the results.
    I’m not trying to be critical: just trying to get a feel for whether I should dig my CM3 out again :)
  • Auturgy, the sync question was the main thing for us to analyze when we started developing the first generation of the board, on the CM1, back in 2015. On the Pi forums, user 6by9 mentioned this question several times in the stereoscopic forum thread. There is no way to sync the cameras absolutely, but in practice the capture times differ by just a few milliseconds. I was afraid of errors accumulating during long captures, which is why we ran a lot of long-run tests: we capture HD images, stream them over UDP, and then analyze the stream (usually for 48 hours). We did not find any issues causing depth-map artifacts.

  • Cool! This has been on my to-do list for too long.
    How did you solve the time-sync issues between the cameras? I looked at this some time ago; IIRC the sync pins on the Pi cam aren’t exposed, and you can’t deterministically trigger the image capture because of how I2C is implemented.