Hey all, remember OpenMV Cam? It's the machine vision module that runs Python.
Well, we finally launched it on Kickstarter http://bit.ly/openmvcam. It does color tracking and object tracking, and records movies and photos too. All scriptable with Python. And it supports shields (think WiFi, LCD, IMU, ...). It has IR LEDs, an M12 lens mount, and an STM32F427 @ 180MHz.
I'd just use libdmtx, and it has Python bindings if you want to use those.
@unnamed idea - OpenCV is a bit heavyweight for an embedded processor like a Cortex M4F.
@Simon, that sounds really interesting. I'm guessing, yeah, we'd need to write some code to recognize specific markers unless there's some way to use FAST/FREAK to do it.
I'm very excited to see this project hit Kickstarter; it's going to be so useful for many projects.
One idea I have (if anybody would like to collaborate) is to use optical tracking of barcode-like markers to provide yaw compensation and distance tracking. This could then be combined with gyro-accelerometer fusion to provide a self-contained AHRS for indoor drone use, and maybe positional tracking for VR.
Perhaps this could output fake GGA messages (in its own coordinate frame) to interface easily with existing flight controllers.
Obviously this is more than can be achieved solely in Python, but MicroPython/OpenMV do allow linking in other C libs. The two I am thinking of are:
Anyway, just some thoughts.
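The fake-GGA idea above could be sketched like this. The position values and fix fields are placeholders for whatever the marker tracker produces in its own coordinate frame; only the sentence framing and checksum follow the NMEA 0183 convention:

```python
# Sketch: wrap a local-frame position fix in a fake NMEA GGA sentence
# so an existing flight controller can consume it as if it were GPS.
# fake_gga() and its arguments are hypothetical names for illustration.

def nmea_checksum(body):
    """XOR of every character between '$' and '*' (NMEA 0183)."""
    cs = 0
    for ch in body:
        cs ^= ord(ch)
    return cs

def fake_gga(lat_deg, lon_deg, alt_m):
    """Build a GGA sentence from decimal-degree lat/lon (placeholder fix data)."""
    lat_hemi = 'N' if lat_deg >= 0 else 'S'
    lon_hemi = 'E' if lon_deg >= 0 else 'W'
    lat, lon = abs(lat_deg), abs(lon_deg)
    # NMEA encodes latitude as ddmm.mmmm and longitude as dddmm.mmmm
    lat_field = "%02d%07.4f" % (int(lat), (lat - int(lat)) * 60)
    lon_field = "%03d%07.4f" % (int(lon), (lon - int(lon)) * 60)
    body = "GPGGA,120000.00,%s,%s,%s,%s,1,08,0.9,%.1f,M,0.0,M,," % (
        lat_field, lat_hemi, lon_field, lon_hemi, alt_m)
    return "$%s*%02X" % (body, nmea_checksum(body))

print(fake_gga(51.5074, -0.1278, 10.0))
```

On the OpenMV module this string would just be written out over a UART to the flight controller.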
SAR = Synthetic Aperture Radar
Search and Rescue = S&R
I'll let you off this time, but don't let it happen again.
Hey Bob, good luck with the campaign. I have a question (I'm not familiar with the Cortex family): can we install and use OpenCV with your module/software architecture? And one more: if so, why don't you use OpenCV by default?
Thanks for the support, everyone! Let me answer a few questions...
@Mark: Frame rate varies depending on what it's doing and what the resolution is. We'll get some more performance figures posted. We don't have a video out pin but we do have a DAC pin. Are you thinking FPV? Would that be useful for drones given it's only a 2MP camera? Would be happy to discuss offline.
@Euan: Regarding color searching, the way it works is you run the IDE with the framebuffer viewer script, hover your mouse over the color of interest, and tune the RGB parameters and threshold to get a reliable match. If you want to talk offline about specific needs, I'm all ears. :) Also, the IDE comes with example code for its various capabilities as a starting point. :)
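The tuning described above boils down to a per-channel tolerance check. Here is a minimal sketch of that idea in plain Python; the function names and the single shared threshold are illustrative, not OpenMV's actual API:

```python
# Sketch: match pixels against a picked target color within a tolerance,
# the same idea as tuning RGB thresholds in the framebuffer viewer.

def find_color(frame, target, threshold):
    """Return (x, y) coordinates of pixels whose every RGB channel
    is within `threshold` of the target color."""
    hits = []
    for y, row in enumerate(frame):
        for x, px in enumerate(row):
            if all(abs(c - t) <= threshold for c, t in zip(px, target)):
                hits.append((x, y))
    return hits

# Tiny 2x2 "frame": two reddish pixels, two black ones.
frame = [[(200, 40, 40), (0, 0, 0)],
         [(0, 0, 0), (205, 45, 35)]]
print(find_color(frame, (200, 40, 40), 10))  # → [(0, 0), (1, 1)]
```

A looser threshold matches more lighting variation but also more false positives, which is why the interactive tuning in the IDE helps.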
@Burraak: I'll test and post FPS specs on the KS with the new 180MHz processor as soon as I can, ok? (we were using the '407 before, 168MHz). I'll try to echo the data here, too. We're using a 2MP JPEG sensor.
Niiice... backed! =)
So, all in all: we used to use up to 32 different sensors out there,
but now, with high-speed processing, analysing a picture eliminates the need for most of those sensors,
and the code is available as open source.
I have been watching for a few years with interest to see what standards form around it.
One thing I have been interested in is that an eye is just receptive to a frequency that is then interpreted as light and colour.
So instead of trying to build devices that emit light, it could just become a finite aerial that emits that frequency.
And on the other hand, a flat-panel aerial that picks up those frequencies.
So is the tech jump an optical problem or an RF problem?
Jake - SAR = Search and Rescue.