Here is Revision 2 of the Companion Computer System Architecture. Thanks to JB for this one!!

This is the latest revision of the Companion Computer Architecture.

After some more discussion regarding the overall structure, we thought it would be a good idea to incorporate the various parts of the complete system typically used in flying a UAV.

Please note that some of the items are optional or redundant and can be left out, as indicated by a *. Ideally the RF link comprises only one link, most likely over wifi; however, for redundancy and compliance purposes other connections are shown.

This is composed of four main building blocks:

  1. FC: Flight Control - This sub-system includes the RTOS-based autopilot, telemetry radio*, RC receiver* and peripherals

  2. CC: Companion Computer - This sub-system includes a higher-level Linux-based CPU and peripherals

  3. GCS: Ground Control Station - This sub-system is the user interface for UAV control. It typically includes PC/Linux/iOS and Android based platforms that can communicate via telemetry radio, wifi or LTE/3G/4G

  4. MLG: Multi Link Gateway - This is an optional system for use on the ground to provide connectivity with the CC and FC. It can also be used as a local AP, media store, antenna tracker, etc.

The FC is connected as follows:

  • via RC receiver* to the remote control

  • via telemetry* radio to the GCS

  • via UART or Ethernet to the CC

  • via FC IO to peripherals like ESCs, servos, etc.

 

The CC is connected as follows:

  • via WLAN to the GCS and/or MLG

  • via LTE/3G/4G* modem to the GCS and/or MLG

  • via UART or Ethernet to the FC

  • via CC IO to HD peripherals like USB or CSI cameras, etc.

 

The GCS is connected as follows:

  • via telemetry* radio from the FC

  • via MLG WLAN or MLG AP or direct from the CC AP

  • via MLG or direct LTE/3G/4G* through the internet or PtP

  • control of the tracking unit on the MLG*

  • various peripherals like joysticks, VR goggles, etc.

 

Replies to This Discussion

Patrick,

My wifi dongle returns this when I type "lsusb".  Is there a better way to get it to say what brand it is?

Realtek Semiconductor Corp. RTL8188CUS 802.11n WLAN Adapter

To connect over UDP, we can make the connection string in balloon_finder.cnf something like this:

connection_string = udpout:localhost:14550  ("localhost" can be replaced with an actual IP address if necessary)

This is actually what I've had to do to allow splitting the MAVLink stream so that it flows both to the GCS and DroneKit (mavproxy is doing the split). So mavproxy now connects to the usb-to-serial device and outputs the stream to two separate UDP addresses. The local UDP address is used by DroneKit; the non-local one sends the stream to the GCSs.
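
For reference, a minimal sketch of that split (the device name, baud rate and GCS address below are illustrative): mavproxy is started as

mavproxy.py --master=/dev/ttyUSB0,57600 --out=udpin:localhost:14550 --out=udp:192.168.1.50:14550

and the DroneKit side then attaches with the udpout string from balloon_finder.cnf:

from dronekit import connect

# connect to the local endpoint that mavproxy is listening on
vehicle = connect('udpout:localhost:14550', wait_ready=True)
print(vehicle.mode.name)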

Thanks Randy,

I just wanted to double-check that the FC connection could be UDP and not limited to serial, because I see the baud rate is passed as a parameter.

Concerning the wifi dongle, please take caution with this chipset, because it is not certified as AP-capable; look here: https://wireless.wiki.kernel.org/en/users/drivers . This is the reason I asked the question. I tried with my own RTL8188cus, just to check if the installed rtl871drv could do ''magic'', but it kept crashing under heavy video traffic. Now you know why so many experimenters drop the wifi dongle and go to a dedicated router :-). Finding a fully compliant AP chipset is not an easy task, because it is not written in most of the specs, and the arrival of the 802.11ac stack made things really messy.

Hello Randy,

Made some tests on my side with OpenCV and C++, building a tracker with added functions, including cvHoughCircles. This is what can differentiate a balloon from a patch of red dirt... or a red car :-)

// Convert color space to HSV as it is much easier to filter colors in the HSV color-space.
cvCvtColor(frame, hsv_frame, CV_BGR2HSV);
// Filter out colors which are out of range.
cvInRangeS(hsv_frame, hsv_min, hsv_max, thresholded);
// Memory for Hough circles
CvMemStorage* storage = cvCreateMemStorage(0);
// The Hough detector works better with some smoothing of the image
cvSmooth(thresholded, thresholded, CV_GAUSSIAN, 9, 9);
// Hough transform to detect circles
CvSeq* circles = cvHoughCircles(thresholded, storage, CV_HOUGH_GRADIENT, 2,
                                thresholded->height/4, 100, 50, 10, 400);

And I can output 6.5 readings per second; with Python, cvHoughCircles makes 1 frame per 4 seconds.
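
For comparison, roughly the same pipeline through the newer cv2 Python bindings looks like this (a sketch only; the HSV bounds and Hough parameters are illustrative and would need tuning):

import cv2
import numpy as np

frame = cv2.imread('frame.jpg')               # stand-in for a captured frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)  # HSV makes color filtering much easier
mask = cv2.inRange(hsv, np.array([0, 120, 80]), np.array([10, 255, 255]))
mask = cv2.GaussianBlur(mask, (9, 9), 0)      # Hough works better on a smoothed image
circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=2,
                           minDist=mask.shape[0] // 4,
                           param1=100, param2=50, minRadius=10, maxRadius=400)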

I just ordered an RPI3 and hopefully we will get 10 FPS without modification.

So what I suggest is that we run the object detection in C++ and publish the object position as x-y coordinates + time stamp on a UDP port. What do you think?

On another subject, I just tested the TPLINK TL-WN722N usb adapter. This is a very interesting wifi dongle, fully nl80211 compliant, and it has a removable RP-SMA antenna that you can fit in all sorts of configurations. It only runs on 2.4 GHz, so it's a GCS-joystick configuration for just $10.

This is the RPI3 running OpenCV C++ using a USB MJPG camera at 640x480 fitted with a fisheye lens.

The code is optimized to reduce false positives on the histogram and the cvHoughCircles steps.

Speed: 7.6 FPS

Comments are welcome :-)

Looking good. The 7.6 FPS is better than I'm getting with the balloon-popper on the RPI2.

I had very bad luck with the cvHoughCircles though when I used it originally. It seemed to be easily thrown off by the sun hitting the balloon (which makes part of the balloon appear white instead of red).

Yep,

Some additional info:

There is no major gain with the RPI3 because:

- The OS is the same, so it's not using the 64-bit core

- The overclocking option is disabled... we can overclock the RPI2 without problem

- The GPU driver is the same, so no real optimisation

- The CPU is running at 27%, so one core is fully loaded; I might get additional FPS gain by multithreading (next lab)

Did some tests with UDP and it is quite interesting. I really think we should split the image processing from the controller. This way, we can substitute the devices and/or the programs; as an example, it is possible to interface an RPi Zero with a Pixy Cam and get pretty good results!! I really need your feedback on this.

Ok, so the red-balloon-popper should be using the python "processing" features which should end up with it running on multiple cores.  That was working on Odroid and I had assumed it was working on the RPI2 but I haven't specifically looked at "top" to make sure all cores were being used.
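
For reference, that pattern boils down to something like this (a minimal sketch; the function and data names are hypothetical):

from multiprocessing import Pool

def analyse_frame(frame):
    # placeholder for the per-frame OpenCV work
    return frame * 2

if __name__ == '__main__':
    frames = range(8)                       # stand-ins for captured frames
    pool = Pool()                           # one worker process per core by default
    print(pool.map(analyse_frame, frames))  # frames are analysed in parallel
    pool.close()
    pool.join()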

In the red-balloon-popper code the image processing is somewhat split from the controls.  So the find_balloon.py script's analyse_frame() function does most of the OpenCV work while balloon_strategy.py is the "main" program and also implements the controls (i.e. search_for_balloon and move_to_balloon functions).  It could be divided up more though for sure and perhaps you're referring to separating them into separate processes?

The PixyCam is really good and super fast.  The IRLock sensor is really a PixyCam so we already have a driver for it in Ardupilot.  It could certainly be used to implement a red-balloon-finder but it seems to me it's not a very general solution.  It can find a single type of object but it doesn't capture the video and modifying the algorithm would mean diving into the PixyCam code which I'm pretty sure is quite different from programming a companion computer running Ubuntu (or whatever).  Also the next big step seems to be "Deep Learning" (whatever that means) which I think is going to require a really beefy companion computer (like the nVidia).

Randy,

Bear in mind that I am NOT working in Python at this stage; it is in C++. The Python multithreading seems to work with the RPI, but find_balloon.py is not fast enough... as you know. So, I am trying to speed it up by implementing find_balloon in C++, but to do so, I am looking for a simple and elegant way to get the two environments to work together on multiple platforms, and the only way I found logical is by using UDP. Otherwise you have to work with shared memory, shared pipes, or shared files, and if anyone can show me the benefit over UDP, please be my guest.

This way we can "publish" the following, as a specific private UDP stream, as a -pose topic under ROS (like the Bebop drone), or as a MAVLink navigation command (or else); a minimal UDP sketch follows the list:

balloon_found
balloon_x
balloon_y
balloon_radius
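
As a concrete illustration of the publishing side (the port number and the JSON encoding are my own picks; a packed struct would work just as well):

import json, socket, time

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def publish(found, x, y, radius, addr=('127.0.0.1', 14560)):
    # one datagram per detection, time-stamped so the consumer can drop stale data
    msg = {'time': time.time(), 'balloon_found': found,
           'balloon_x': x, 'balloon_y': y, 'balloon_radius': radius}
    sock.sendto(json.dumps(msg).encode('utf-8'), addr)

publish(True, 320, 240, 35)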

So, the video process would integrate colour_finder.py + find_balloon.py + balloon_video.py + balloon_utils.py and the associated configuration file (width, height, lens (FOV), color filter, balloon diameter, etc.). Technically I would generate a vector corresponding to a lens-compensated projection, with an estimated distance based on a known balloon diameter.
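
For what it's worth, here is a sketch of that distance estimate under a simple pinhole model (the image width, FOV and balloon diameter defaults are illustrative):

import math

def balloon_distance(radius_px, img_width_px=640, fov_deg=60.0, diameter_m=0.3):
    # focal length in pixels, derived from the horizontal field of view
    f_px = (img_width_px / 2.0) / math.tan(math.radians(fov_deg) / 2.0)
    # pinhole model: distance = focal length * real diameter / apparent diameter
    return f_px * diameter_m / (2.0 * radius_px)

print(balloon_distance(35))   # ~2.4 m for a 70-pixel-wide balloon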

There is a requirement to add a Kalman filter so that the vector and the distance are stable even if the detection algorithm randomly jumps around the object.
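
As a starting point, a scalar (constant-state) Kalman filter is only a few lines; a sketch with illustrative noise values, to be run once per axis and once more for the distance:

class Kalman1D:
    def __init__(self, q=1e-3, r=1e-1):
        self.q, self.r = q, r         # process and measurement noise (tune these)
        self.x, self.p = 0.0, 1.0     # state estimate and its covariance

    def update(self, z):
        self.p += self.q                   # predict: the state is assumed constant
        k = self.p / (self.p + self.r)     # Kalman gain
        self.x += k * (z - self.x)         # correct with the new measurement
        self.p *= (1.0 - k)
        return self.x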

Yes, I agree with the ''Deep Learning'' stuff; the DARPA truck in the Nvidia video is quite impressive. At the rate I am ordering computers these days, I could certainly get a brand new TX1 within a month :-) ... I am waiting for release 2 of Jurgen's baseboard ... anyway, the v4l2 driver is not released yet... here's Dusty's message on the developers forum:

Sorry, the update was pushed out to include critical kernel, DVFS and perf fixes. Current ETA is late Feb/March.

...so for the moment we would end up with a "blind" 20-watt number cruncher!!!

Hi Patrick!!

I finally had some time to make some tests using the rpi 3 as well... And I decided to lower the resolution. Allow me to explain why this might work in my application...

My objective is a bit different from yours, but quite similar. I will make a vehicle hover over this roomba robot, and then the roomba robot will start moving, and the quadcopter will track the movements of the roomba. Kinda like the idea of a missile, but with no destruction (hopefully...) and never approaching on Z... hehehe

My color finding algorithm (https://github.com/alduxvm/rpi-opencv) is working at 8.3 Hz when running at VGA resolution, just 1 Hz faster than yours :P

Doing a position controller, or hover controller, gets very problematic with an update rate of less than 10 Hz for the feedback sensors... It is doable but not recommended. So, our 7-8 Hz running at 640x480 will not be enough. If we reduce the resolution to 320x240, we can have a stable 20 Hz; take a look here: https://www.youtube.com/watch?v=pu_9DGT2qO0
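
(With a USB camera and the cv2 bindings, for example, the resolution drop is just a capture property; the property names below are from the OpenCV 3.x API, older releases use cv2.cv.CV_CAP_PROP_* instead:)

import cv2

cap = cv2.VideoCapture(0)                # first USB camera
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)   # request 320x240 instead of VGA
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)
ok, frame = cap.read()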

But there is a problem with lowering resolution... it is harder to get a signature. In my case, I want to follow the roomba, and these two (roomba and multirotor) will be 1 meter apart, and after doing tests... the rpi is able to properly "see" the roomba!! :D

But maybe for the balloon finder, the signature will be too low and it is not going to be able to find it :( (I recommend testing!).

So, the next step is to do the tee stuff to send video to the ground station and be able to see what the vehicle sees... I have done this in the past: https://altax.net/blog/low-latency-raspberry-pi-video-transmission/ . With this approach it is even possible to fly small FPV racers!!

Ok, so, raspivid to a fifo file, then netcat to send it to a computer... the problem is doing the computer vision... I cannot yet find a solution to read a fifo file using opencv and python... Maybe someone has???

If we are able to read a fifo file in Python OpenCV, then we can have a command in the background reading the camera with raspivid and teeing to netcat and Python... Has anyone got closer with this??
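
One possible, untested approach: skip H.264 entirely and have raspividyuv write raw frames of a known size to the fifo, which plain Python can read and reshape for OpenCV. A sketch, assuming raspividyuv is started with its raw RGB output ("raspividyuv -w 320 -h 240 --rgb -t 0 -o /tmp/video_fifo"):

import cv2
import numpy as np

W, H = 320, 240
with open('/tmp/video_fifo', 'rb') as fifo:
    while True:
        buf = fifo.read(W * H * 3)        # one raw RGB frame
        if len(buf) < W * H * 3:
            break                         # the writer closed the fifo
        frame = cv2.cvtColor(np.frombuffer(buf, dtype=np.uint8).reshape((H, W, 3)),
                             cv2.COLOR_RGB2BGR)   # raspividyuv emits RGB, OpenCV expects BGR
        # ... run the balloon-finding pipeline on frame here ...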

Hola Aldo

Good to have an update on your side.
Color histograms are quite fast and can do impressive stuff in a controlled lighting environment. Chasing the roomba is a great project, and it is made easier considering you work x-y and leave z as a constant. Then you can add the height control by getting the controller to keep the roomba (blob, houghcircle or else) at a specific size.

Concerning the TEE, you can use v4l2loopback, which works with gstreamer under release 0.10; you can take a look at my balloon finder blog in the companion computer forum.

Best regards

This is a side project to test the Raspberry Pi Zero's capability to run APM standalone and as a companion computer as well: http://diydrones.com/profiles/blogs/mini-zee-a-100-diy-smart-drone-...

That's great that you got the Tower beta going. I tried a while ago with UDP video, but couldn't connect. I will try again.
