Hello Companion Experimenters
Based on Randy's Red_Balloon_Finder project, this is an implementation of that fascinating project on a Raspberry Pi 2.
This is the perfect companion computer project, and it covers the whole spectrum of autonomous flight:
1- High level processing language: Python + OpenCV
2- Full integration of the Companion Computer (CC) with the Flight Control (FC) and the Ground Control Station (GCS) using MAVProxy over a mix of serial and UDP links
3- Integration of the Balloon_Finder with Software In The Loop (SITL)
4- Use of the Drone API, allowing all sorts of automated flight control
5- Perfect introduction to OpenCV, using a drone-specific application that is relatively easy to program and configure with Python
6- Creation of a complete, standardized system image dedicated to this application, so interested experimenters can use it as a training tool to get into vision-controlled autonomous flight
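For the serial + UDP mix mentioned in point 2, a typical MAVProxy invocation on the CC might look like this; the serial device, baud rate, and GCS address are assumptions for a Pi UART link, not values from this project:

```shell
# FC wired to the Pi UART; GCS reachable over Wi-Fi at 192.168.1.10 (assumed address)
# A second local UDP output leaves a port free for Drone API / scripts.
mavproxy.py --master=/dev/ttyAMA0 --baudrate 57600 \
            --out udp:192.168.1.10:14550 \
            --out udp:127.0.0.1:14551
```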
Here is the proposed system:
It is based on the Companion Computer architecture described here.
It is composed of 3 major building blocks:
1- FC: Flight Control; this is basically the autopilot
2- CC: Companion Computer, which handles the high-level computation
3- GCS: Ground Control Station; this is Mission Planner, QGroundControl, the MAVProxy console, or any other GCS
The FC is interfacing with these signals:
A) Manual or backup control link from the radio control. Transmission type: PWM, PPM, SBus, etc. Freq.: 2.4 GHz
B) Telemetry & control to and from the CC using UART. Signal type: MAVLink. Direct-connect.
C) Interface with various sensors and actuators: PWM, UART, I2C, SPI, USB, GPIO
The CC is interfacing with these signals:
A) Main downlink to the GCS. Transmission type: Wi-Fi, LTE, other. Freq.: 5 GHz (Wi-Fi). Bandwidth: 6 to 600 Mbps
B) Telemetry & control to and from the FC using UART.
C) Interface with the camera, using USB 2 for a Logitech webcam or CSI-2 for the Raspberry Pi camera
The GCS is interfacing with these signals:
A) Main downlink to/from the CC.
B) Optional connection to Internet using LTE or HotSpot
C) On the computer side, numerous interfaces can be connected:
i. Immersive goggles
ii. Gesture interface (embedded tablet IMU)
iii. You name it
Yes I had found that blog. But this appears to destroy my wlan0 and it no longer connects to my AP even though the /etc/network/interfaces settings haven't changed.
I do want the rPi to bridge the eth0 (192.168.0.*) network and the wlan0 (192.168.178.*) network. But I don't know if standard bridging will allow my workstation on the (192.168.178.*) network to talk to a device on the (192.168.0.*) network.
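Since the two interfaces are on different subnets, plain layer-2 bridging generally won't join them without renumbering one side; an alternative is to have the Pi route between them with IP forwarding and NAT. A minimal sketch, assuming eth0 faces 192.168.0.* and wlan0 faces 192.168.178.* (not tested against this exact setup):

```shell
# Enable routing between wlan0 (192.168.178.*) and eth0 (192.168.0.*)
sudo sysctl -w net.ipv4.ip_forward=1

# Masquerade traffic leaving on eth0 so 192.168.0.* devices see the Pi's address
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# Allow forwarding wlan0 -> eth0, and replies back
sudo iptables -A FORWARD -i wlan0 -o eth0 -j ACCEPT
sudo iptables -A FORWARD -i eth0 -o wlan0 -m state --state RELATED,ESTABLISHED -j ACCEPT
```

With MASQUERADE the workstation needs no extra routes, but connections can only be initiated from the wlan0 side; drop the NAT rule and add static routes on both networks if devices on 192.168.0.* must initiate connections too.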
I am curious to know whether, instead of using the balloon-popper code, we could use a similar setup as described in this post for following designated targets in the live video frame. Something like Follow Me, but instead of using GPS to track the target position, we use the RPi to calculate the relative position of the target from the drone/camera and accordingly send velocity requests to the drone in GUIDED mode to follow the target.
Yes, it is possible, but in the current state of development you will have to carry a red balloon with you !!
More advanced types of object recognition will be implemented in future releases, but I must admit that for human tracking from above we will need more processing power. To accomplish this, you need to have the computer ''learn'' the matching pattern, like the OpenCV HOG descriptor; see this excellent blog here that explains the principle. I doubt there are many models that match a human from above, but it should become possible with an Nvidia TX1 running deep learning — though that project belongs in a different Companion Computer forum, and with a much higher price tag :-)
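The idea raised in the question above — turning the target's pixel offset from the image centre into a GUIDED-mode velocity request — can be sketched in a few lines. The frame size, gains, and speeds below are illustrative assumptions, not values from the Red_Balloon_Finder code:

```python
def target_velocity(px, py, width=640, height=480, gain=2.0, forward=1.0):
    """Map a target at pixel (px, py) to a body-frame velocity request.

    Returns (vx, vy, vz) in m/s, NED-style convention:
      vx: constant forward speed toward the target (assumed 1 m/s),
      vy: rightward correction proportional to horizontal pixel offset,
      vz: downward correction proportional to vertical pixel offset
          (positive vz is down, matching the target sitting low in frame).
    Gains and frame size are hypothetical tuning values.
    """
    nx = (px - width / 2.0) / (width / 2.0)    # -1..1, positive = right of centre
    ny = (py - height / 2.0) / (height / 2.0)  # -1..1, positive = below centre
    return (forward, gain * nx, gain * ny)
```

In a real loop you would clamp these velocities and send them to the vehicle each cycle, for example via DroneKit's `message_factory.set_position_target_local_ned_encode()` followed by `vehicle.send_mavlink()`.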
I've been off consumed with various things for a couple of weeks but this is still very much on my mind.
Do you think this v4l2loopback would work with the RPi 2 camera, or would it only work with a webcam? I think the RPi camera doesn't appear as a device for some reason.
Me too; as you know, I took some days off to get my hands into the MICRO-Zee project. It helps sometimes to switch your brain to a different project :-)
I'll give it a try. Basically, you set the RaspiCam up as a V4L2-compatible device by loading this driver into the kernel: sudo modprobe bcm2835-v4l2 (you can make it permanent by adding it to /etc/modules).
The major drawback with v4l2loopback is that it only works with gst-0.10, so you can expect all sorts of limitations and compatibility problems. It is a solution with a very narrow set of options, but it can do the job.
It seems there is a bug in negotiating supported resolutions between the raspicam V4L2 driver and GStreamer. You can find more info on the official RasPi forum. Thanks to the great developers at the Raspberry Pi Foundation, there is also a workaround/fix for that.
When loading the driver, just add the "gst_v4l2src_is_broken=1" flag, like this:
sudo modprobe bcm2835-v4l2 gst_v4l2src_is_broken=1
Here is an example of the Raspberry Pi camera V4L2 driver, mapped on video0, streaming through GStreamer to a v4l2loopback virtual device video1:
gst-launch -v v4l2src device=/dev/video0 ! ffmpegcolorspace ! video/x-raw-yuv,width=640,height=480,framerate=15/1 ! v4l2sink device=/dev/video1
And then we opened 2 concurrent OpenCV sessions, both feeding from the same source: the virtual mapped video1.
Total processor: 47%; each OpenCV session utilizes 88% of a core.