New OpenMV computer vision board, with twice the power

Just announced: the next-generation OpenMV Cam with the new M7 processor, offering twice the power at an even lower price ($55). The original OpenMV is my favorite computer vision processor, and this new one looks even better. Preorders are open now.

The OpenMV Cam M7 board is our next-generation OpenMV Cam. It features 1.5-2X the processing power, 2X the RAM, and 2X the flash of the previous OpenMV Cam. In particular, the increased RAM means we have space to JPEG compress images at a much higher quality. Additionally, the MicroPython heap gets 64KB more space, so you can actually create large image copies now. The M7 is a superscalar processor capable of executing 1-2 instructions per clock, so algorithm speed-ups will vary. But it's faster by default at 216 MHz than the M4 core at 168 MHz.

In addition to the better processor, the USB connector now has through-hole stress-relief pads so you can't rip it off. We also added another I/O pin and exposed the OV7725's frame sync pin so you can sync up two cams' video streams. To make room for more I/O pins we moved all the debug pins to a special debug header. The debug header is mainly for our internal use to test and program OpenMV Cams, but you can hook up an ARM debugger to it too. Other than that, we switched out the IR LEDs for surface-mount ones. This change, along with a few other fixes, will allow us to offer the OpenMV Cam at $65 retail.

But if you pre-order today, you can get the new OpenMV Cam for $55. We'll be taking pre-orders for the next two months. We need to add $15K to our coffers to afford production of the new board in a 1K+ build run, which means we need about 300+ pre-orders. If you want to see the OpenMV Cam project move forward, please pre-order the new OpenMV Cam M7.

Now, if we can get a lot more sales than just 300 pre-orders, that will allow us to lower the prices for shields too, which are built in 100-unit quantities. Please go on a shopping spree and let everyone else know about it too! Feel free to add anything else in our store to your cart and we'll ship it all together.

Note that the final board will be black; the images are from our prototype. Assuming no obstacles, we'll start manufacturing OpenMV Cam M7's at the start of February, and they should start shipping in March, with continued shipments thereafter.


The OpenMV Cam is a small, low-power microcontroller board which allows you to easily implement machine vision applications in the real world. You program the OpenMV Cam in high-level Python scripts (courtesy of the MicroPython operating system) instead of C/C++. This makes it easier to deal with the complex outputs of machine vision algorithms and to work with high-level data structures. But you still have total control over your OpenMV Cam and its I/O pins in Python. You can easily trigger picture and video capture on external events, or execute machine vision algorithms to figure out how to control your I/O pins.

The OpenMV Cam features:

  • The STM32F765VI ARM Cortex M7 processor running at 216 MHz with 512KB of RAM and 2 MB of flash. All I/O pins output 3.3V and are 5V tolerant. The processor has the following I/O interfaces:
    • A full speed USB (12Mbs) interface to your computer. Your OpenMV Cam will appear as a Virtual COM Port and a USB Flash Drive when plugged in.
    • A μSD Card socket capable of 100Mbs reads/writes, which allows your OpenMV Cam to record video and lets you easily pull machine vision assets off of the μSD card.
    • A SPI bus that can run up to 54Mbs allowing you to easily stream image data off the system to either the LCD Shield, the WiFi Shield, or another microcontroller.
    • An I2C Bus, CAN Bus, and an Asynchronous Serial Bus (TX/RX) for interfacing with other microcontrollers and sensors.
    • A 12-bit ADC and a 12-bit DAC.
    • Three I/O pins for servo control.
    • Interrupts and PWM on all I/O pins (there are 10 I/O pins on the board).
    • And, an RGB LED and two high power 850nm IR LEDs.
  • The OV7725 image sensor is capable of taking 640x480 8-bit Grayscale images or 320x240 16-bit RGB565 images at 30 FPS. Your OpenMV Cam comes with a 2.8mm lens on a standard M12 lens mount. If you want to use more specialized lenses with your OpenMV Cam you can easily buy and attach them yourself.
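The RGB565 format mentioned above packs each pixel into 16 bits: 5 bits of red, 6 of green, and 5 of blue, which is why a 320x240 RGB565 frame fits in half the space of 24-bit color. As an illustration in plain desktop Python (this is not OpenMV API code, just the bit-packing itself):

```python
def rgb888_to_rgb565(r, g, b):
    """Pack 8-bit R, G, B channels into one 16-bit RGB565 value."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def rgb565_to_rgb888(p):
    """Unpack RGB565 back to 8-bit channels (the dropped low bits are lost)."""
    r = (p >> 11) & 0x1F
    g = (p >> 5) & 0x3F
    b = p & 0x1F
    # Shift back up to an 8-bit range per channel
    return (r << 3, g << 2, b << 3)

# Pure white packs to 0xFFFF, pure red to 0xF800
print(hex(rgb888_to_rgb565(255, 255, 255)))  # 0xffff
print(hex(rgb888_to_rgb565(255, 0, 0)))      # 0xf800
```

Note the conversion is lossy: unpacking 0xF800 gives (248, 0, 0), not (255, 0, 0), because the bottom three red bits were discarded.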

For more information about the OpenMV Cam please see our documentation.

Applications

The OpenMV Cam can currently be used for the following things (with more to come):

  • Frame Differencing
    • You can use Frame Differencing on your OpenMV Cam to detect motion in a scene by looking at what's changed. Frame Differencing allows you to use your OpenMV Cam for security applications.
  • Color Tracking
    • You can use your OpenMV Cam to detect up to 32 colors at a time in an image (realistically you'd never want to find more than 4) and each color can have any number of distinct blobs. Your OpenMV Cam will then tell you the position, size, centroid, and orientation of each blob. Using color tracking your OpenMV Cam can be programmed to do things like tracking the sun, line following, target tracking, and much, much, more.
  • Marker Tracking
    • You can use your OpenMV Cam to detect groups of colors instead of independent colors. This allows you to create color markers (2 or more color tags) which can be put on objects, allowing your OpenMV Cam to understand what the tagged objects are.
  • Face Detection
    • You can detect Faces with your OpenMV Cam (or any generic object). Your OpenMV Cam can process Haar Cascades to do generic object detection and comes with a built-in Frontal Face Cascade and Eye Haar Cascade to detect faces and eyes.
  • Eye Tracking
    • You can use Eye Tracking with your OpenMV Cam to detect someone's gaze. You can then, for example, use that to control a robot. Eye Tracking detects where the pupil is looking versus detecting if there's an eye in the image.
  • Optical Flow
    • You can use Optical Flow to detect translation of what your OpenMV Cam is looking at. For example, you can use Optical Flow on a quad-copter to determine how stable it is in the air.
  • Edge/Line Detection
    • You can perform edge detection via either the Canny edge detector algorithm or simple high-pass filtering followed by thresholding. After you have a binary image, you can then use the Hough detector to find all the lines in the image. With edge/line detection you can use your OpenMV Cam to easily detect the orientation of objects.
  • Template Matching
    • You can use template matching with your OpenMV Cam to detect when a translated pre-saved image is in view. For example, template matching can be used to find fiducials on a PCB or read known digits on a display.
  • Image Capture
    • You can use the OpenMV Cam to capture up to 320x240 RGB565 (or 640x480 Grayscale) BMP/JPG/PPM/PGM images. You directly control how images are captured in your Python script. Best of all, you can perform machine vision functions and/or draw on frames before saving them.
  • Video Recording
    • You can use the OpenMV Cam to record up to 320x240 RGB565 (or 640x480 Grayscale) MJPEG video or GIF images. You directly control how each frame of video is recorded in your Python script and have total control over how video recording starts and finishes. And, like capturing images, you can perform machine vision functions and/or draw on video frames before saving them.
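Several of these applications boil down to simple per-pixel arithmetic. Frame differencing, for example, just counts how many pixels changed between two frames. Here is a minimal sketch in plain desktop Python (the frame data, threshold, and minimum-pixel count are made-up illustrative values; a real OpenMV script would operate on camera frames via the firmware's image API rather than on nested lists):

```python
def frame_diff(prev, curr, threshold=20):
    """Count pixels whose grayscale value changed by more than `threshold`.

    `prev` and `curr` are same-sized 2-D lists of 0-255 values.
    """
    changed = 0
    for row_a, row_b in zip(prev, curr):
        for a, b in zip(row_a, row_b):
            if abs(a - b) > threshold:
                changed += 1
    return changed

def motion_detected(prev, curr, threshold=20, min_pixels=4):
    """Declare motion when enough pixels changed significantly."""
    return frame_diff(prev, curr, threshold) >= min_pixels

# A 4x4 static background vs. a frame with a bright 2x2 blob in it
bg = [[10] * 4 for _ in range(4)]
frame = [row[:] for row in bg]
for y in (1, 2):
    for x in (1, 2):
        frame[y][x] = 200
print(motion_detected(bg, frame))  # True
```

Raising `threshold` makes the detector less sensitive to sensor noise, while raising `min_pixels` ignores small, localized changes.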

Finally, all the above features can be mixed and matched in your own custom application along with I/O pin control to talk to the real world.
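Template matching is similarly easy to sketch. Below is an illustrative brute-force sum-of-absolute-differences (SAD) search in plain desktop Python; the image and template are tiny made-up arrays, and the OpenMV firmware's actual matcher may use a different (faster) similarity measure:

```python
def sad(img, tpl, ox, oy):
    """Sum of absolute differences between `tpl` and `img` at offset (ox, oy)."""
    total = 0
    for ty, row in enumerate(tpl):
        for tx, t in enumerate(row):
            total += abs(img[oy + ty][ox + tx] - t)
    return total

def match_template(img, tpl):
    """Return the (x, y) offset in `img` where `tpl` matches best (lowest SAD)."""
    th, tw = len(tpl), len(tpl[0])
    best, best_score = None, None
    for oy in range(len(img) - th + 1):
        for ox in range(len(img[0]) - tw + 1):
            score = sad(img, tpl, ox, oy)
            if best_score is None or score < best_score:
                best_score, best = score, (ox, oy)
    return best

# Embed a 2x2 bright template into a 5x5 dark image at (3, 1), then find it
img = [[0] * 5 for _ in range(5)]
tpl = [[9, 9], [9, 9]]
img[1][3] = img[1][4] = img[2][3] = img[2][4] = 9
print(match_template(img, tpl))  # (3, 1)
```

This exhaustive search only finds translated copies of the template, which matches the "translated pre-saved image" caveat above: a rotated or scaled target needs a different technique.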

Pinout

OpenMV Cam Pinout

Schematic & Datasheets

Dimensions

Camera Dimensions

Specifications

Processor: ARM® 32-bit Cortex®-M7 CPU w/ double-precision FPU, 216 MHz (462 DMIPS); CoreMark score: 1082 (compare w/ Raspberry Pi Zero: 2060)
RAM Layout: 128KB .DATA/.BSS/heap/stack; 384KB frame buffer/stack (512KB total)
Flash Layout: 32KB bootloader; 96KB embedded flash drive; 1920KB firmware (2MB total)
Supported Image Formats: Grayscale, RGB565, JPEG
Maximum Supported Resolutions: Grayscale: 640x480 and under; RGB565: 320x240 and under; Grayscale JPEG: 640x480 and under; RGB565 JPEG: 640x480 and under
Lens Info: Focal length: 2.8mm; aperture: F2.0; format: 1/3"; field-of-view: 115°; mount: M12*0.5; IR cut filter: 650nm (removable)
Electrical Info: All pins are 5V tolerant with 3.3V output. All pins can sink or source up to 25mA. P6 is not 5V tolerant in ADC or DAC mode. Up to 120mA may be sunk or sourced in total across all pins. VIN may be between 3.6V and 5V. Do not draw more than 250mA from your OpenMV Cam's 3.3V rail.
Weight: 16g
Length: 45mm
Width: 36mm
Height: 30mm

Power Consumption

Idle, no μSD card: 110mA @ 3.3V
Idle, with μSD card: 110mA @ 3.3V
Active, no μSD card: 190mA @ 3.3V
Active, with μSD card: 200mA @ 3.3V

Temperature Range

Storage: -40°C to 125°C
Operating: -20°C to 70°C


Comment by Maxime C on December 4, 2016 at 7:21am

I don't understand: is it constrained to Python?

And is Python not a problem for real-time collision avoidance, compared to C++, which is described as faster?


3D Robotics
Comment by Chris Anderson on December 4, 2016 at 10:47am

The CV libraries are all written in C++. You just script them with the built-in MicroPython. 

Comment by Gary McCray on December 4, 2016 at 1:26pm

This is a very impressive (and inexpensive) minimalist vision system with some serious capabilities.

I looked through your site pages and have a few questions that were not obviously answered on the site.

At least I couldn't find them.

Is it true that your board's on board processor chip contains the C++ routines as firmware?

If so is the on board ROM reprogrammable?

And if so, is it your intention to provide firmware updates and / or to enable the user to directly write or add their own C++ routines to that firmware?

Basically can we add or modify the built in CV libraries?

It seems as though this entire system is both open source and open hardware - is that true?

I like the idea of writing simple python scripts to access high level vision routines, but I am pretty sure I will hit limits on the existing ones so am interested in enhancing or writing them from scratch.

Best regards,

Gary

Comment by Cool Dude on December 5, 2016 at 12:28pm

If the firmware was open source, I am sure there would be lot more interest in this than currently present. That will allow a big community to contribute and come up with unique ideas and solutions.

Comment by Jiro Hattori on December 5, 2016 at 3:37pm

Recent consumer drones have flow control and sonar for non-GPS environments. We need a low-cost, updated flow-control sensor other than the old PX4Flow. Is MIT licensing a reason not to accept this OpenMV vision sensor as our candidate?

Comment by iabdalkader on December 5, 2016 at 6:44pm

Hi everyone, I'm the creator of OpenMV.... The OpenMV hardware, firmware and userspace software (IDE and misc utilities) are all open source, hosted on GitHub. The image processing code is actually implemented in C and can be scripted with MicroPython. The MCU is fully programmable using SWD debuggers and/or built-in DFU (I also wrote my own serial bootloader that works with the IDE). We're constantly working on improving the FW and IDE (contributions/bug reports are welcome :) ), but you can write your own firmware if you want. The MIT license is permissive, and we don't mind anyone using our code even on different HW (there are a few projects out there using parts of the code on different HW) as long as you include a copy of the license to give credit to the project. The design files are also released under a permissive license (CC BY-SA).

Comment by Cool Dude on December 28, 2016 at 8:28pm

I did not read carefully before that this project is fully open source. I did order the V2 from RobotShop and I am pretty happy with it. However, I am interested in using two of these for stereo vision. Since the V3's frame sync pin will allow two cams to take pictures at the same time, that would be really helpful. Nice product.

Comment by Al B on January 6, 2017 at 1:15am

Hi Chris, you indicated that "OpenMV is your favorite computer vision processor," so I wonder how you have used it. Do you use it for precision landing with the PX4 firmware? Or do you use it for sense & avoid, or maybe SLAM? 


3D Robotics
Comment by Chris Anderson on January 6, 2017 at 8:02am

I just use it for line/lane detection on autonomous cars. 
