Proposal for companion system architecture

Here is Revision 2 of the Companion Computer System Architecture. Thanks to JB for this one!

[Diagram: Companion Computer System Architecture - Revision 2]

This is the latest revision of the Companion Computer Architecture.

After some more discussion regarding the overall structure, we thought it would be a good idea to incorporate the various parts of the complete system typically used in flying a UAV.

Please note that some of the items are optional or redundant and can be left out, as indicated by a *. Ideally the RF link consists of only one connection, most likely over WiFi; however, other connections are shown for redundancy and compliance purposes.

This is composed of 4 main building blocks:

  1. FC: Flight Control - This sub-system includes the RTOS-based autopilot, telemetry radio*, RC receiver* and peripherals

  2. CC: Companion Computer - This sub-system includes a higher-level Linux-based CPU and peripherals

  3. GCS: Ground Control Station - This sub-system is the user interface for UAV control. It typically includes PC/Linux/iOS and Android based platforms that can communicate via telemetry radio, WiFi or LTE/3G/4G

  4. MLG: Multi Link Gateway - This is an optional system for use on the ground to provide connectivity with the CC and FC. It can also be used as a local AP, media store and antenna tracker etc.

The FC is connected as follows:

  • via RC receiver* to the remote control

  • via Telemetry* radio to the GCS

  • via UART or Ethernet to the CC

  • via FC IO to peripherals like ESC, servos etc.

 

The CC is connected as follows:

  • via WLAN to the GCS and/or MLG

  • via LTE/3G/4G* modem to the GCS and/or MLG

  • via UART or Ethernet to the FC (see the connection sketch after this list)

  • via CC IO to HD peripherals, like a USB or CSI camera, etc.
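
As a concrete illustration of the UART link between the CC and the FC, a companion computer can open the autopilot's serial port with pymavlink. This is only a minimal sketch; the device path and baud rate are assumed typical values, not part of the proposal:

    # Minimal sketch: the CC reading MAVLink telemetry from the FC over UART.
    # /dev/ttyAMA0 and 921600 baud are assumptions - use whatever the FC exposes.
    from pymavlink import mavutil

    fc = mavutil.mavlink_connection('/dev/ttyAMA0', baud=921600)
    fc.wait_heartbeat()    # confirm the FC is talking MAVLink
    print("Heartbeat from system %u component %u" %
          (fc.target_system, fc.target_component))

    # read one attitude message as an example
    msg = fc.recv_match(type='ATTITUDE', blocking=True)
    print("roll=%.2f pitch=%.2f yaw=%.2f" % (msg.roll, msg.pitch, msg.yaw))

The same could be done over the Ethernet option by pointing mavlink_connection at a UDP or TCP endpoint instead of a serial device.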

 

The GCS is connected as follows:

  • via telemetry* radio from the FC

  • via MLG WLAN or MLG AP or direct from the CC AP

  • via MLG or direct LTE/3G/4G* through the internet or PtP

  • control of tracking unit on the MLG*

  • various peripherals like joystick, VR goggles etc

 


Replies

  • This is a side project to test the Raspberry Pi Zero's capability to run APM as a standalone autopilot and as a companion computer as well: http://diydrones.com/profiles/blogs/mini-zee-a-100-diy-smart-drone-...


  • Developer

    I made some concrete progress today by getting the RPI's video and the Pixhawk's telemetry flowing into Tower.

    The way the telemetry works is it flows from the Pixhawk -> RPI2 (appears as /dev/ttyUSB0) -> mavproxy.  Mavproxy then splits it (see new mavproxy_telem_splitter script) and publishes it on two (or more) UDP ports.  One UDP port is local to the RPI2 and allows dronekit to get the telemetry data.  It's also broadcast to the IP addresses of any devices connected to the RPI2's wifi access point.  So actually we should be able to connect as many devices as we want to the RPI2 and they should all get telemetry data.  This broadcast feature was added today by Tridge just for us.
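
    For reference, here is a minimal sketch of what this split looks like from the dronekit side. This is not the actual mavproxy_telem_splitter script; the ports, baud rate and broadcast address below are assumptions:

        # MAVProxy on the RPI2 is assumed to be started roughly like this,
        # forwarding the Pixhawk stream to a local UDP port (for dronekit)
        # and broadcasting it to devices on the wifi access point
        # (addresses and ports are illustrative only):
        #   mavproxy.py --master=/dev/ttyUSB0 --baudrate 921600 \
        #               --out=udp:127.0.0.1:14550 \
        #               --out=udpbcast:192.168.42.255:14550

        from dronekit import connect

        # dronekit attaches to the local UDP stream published by mavproxy
        vehicle = connect('udp:127.0.0.1:14550', wait_ready=True)
        print("Mode: %s, relative alt: %s" %
              (vehicle.mode.name, vehicle.location.global_relative_frame.alt))
        vehicle.close()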

    The video can be seen in [Beta] Tower by following the instructions here and then running this script on the RPI2.  The video is not being split though so when you're doing this, the balloon-finder can't use the camera.  Also the video is always sent to the first device to connect to the RPI2.  So I'll need to look back at the tee stuff Patrick's done to see if that can be incorporated.

    The video shown in Tower is quite laggy but hopefully we can work on that.

    • That's great that you got beta tower going. I tried a while ago with UDP video, but couldn't connect. I will try again.

    • This is the RP3 running OpenCV C++ using a USB MJPG camera at 640x480, fitted with a fisheye lens

      The code is optimized to reduce false positives on the histogram and the cvHoughCircles detection.

      Speed: 7.6 Fps
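
      Since the C++ source isn't posted here, a rough Python/OpenCV sketch of that kind of pipeline (colour pre-filter followed by HoughCircles) is shown below. All thresholds and Hough parameters are guesses, and OpenCV 3 constant names are assumed:

          import cv2
          import numpy as np

          cap = cv2.VideoCapture(0)   # USB MJPG camera
          cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
          cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
              # colour pre-filter: keep only "red-ish" pixels (guessed bounds)
              mask = cv2.inRange(hsv, np.array((0, 120, 70)),
                                 np.array((10, 255, 255)))
              mask = cv2.medianBlur(mask, 5)
              # circle detection on the masked image (illustrative parameters)
              circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, 2, 50,
                                         param1=100, param2=30,
                                         minRadius=10, maxRadius=200)
              if circles is not None:
                  x, y, r = circles[0][0]
                  print("target at (%d, %d), radius %d px" % (x, y, r))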

      3702621813?profile=original

      Comments are welcome :-)

    • Hi Patrick!!

      I finally had some time to make some tests using the rpi 3 as well... And I decided to lower the resolution. Allow me to explain why this might work on my application...

      My objective is a bit different than yours, but quite similar. I will make a vehicle hover over this roomba robot and then the roomba robot will start moving, and the quadcopter will track the movements of the roomba. Kinda like the idea of a missile but with no destruction (hopefully...) and never approach on Z... hehehe

      My color finding algorithm (https://github.com/alduxvm/rpi-opencv) is working at 8.3hz, when running at VGA resolution, just 1hz faster than yours :P

      Doing a position controller, or hover controller, gets very problematic when the feedback sensors update at less than 10 Hz... It's doable but not recommended. So, our 7-8 Hz running at 640x480 will not be enough. If we reduce the resolution to 320x240, we can have a stable 20 Hz; take a look here: https://www.youtube.com/watch?v=pu_9DGT2qO0
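
      For anyone who wants to check what rate their own camera reaches at the lower resolution, a quick way to measure it (camera index and OpenCV 3 property names are assumptions) is:

          import time
          import cv2

          cap = cv2.VideoCapture(0)
          cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
          cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)

          frames, t0 = 0, time.time()
          while frames < 200:
              ok, frame = cap.read()
              if not ok:
                  break
              # ... run the colour-finding step on 'frame' here ...
              frames += 1
          print("%.1f fps" % (frames / (time.time() - t0)))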

      But there is a problem with lowering resolution... it is harder to get a signature. In my case, I want to follow the roomba, and these two (roomba and multirotor) will be about 1 meter apart, and after doing tests... the RPi is able to properly "see" the roomba!! :D

      But maybe for the balloon finder, the signature will be too weak for it to find the balloon :( (I recommend testing!).

      So, the next step is to do the tee stuff to send video to the ground station and be able to see what the vehicle sees... I have done this in the past: https://altax.net/blog/low-latency-raspberry-pi-video-transmission/ - with this approach it is even possible to fly small FPV racers!!

      Ok, so, raspivid to a fifo file, then netcat to send it to a computer... the problem is doing the computer vision... I cannot yet find a solution to read a fifo file using OpenCV and Python... Maybe someone has???

      If we are able to read a fifo file with Python OpenCV, then we can have a command in the background reading the camera with raspivid and teeing it to netcat and Python... Has anyone got closer with this??

    • Hola Aldo

      Good to have an update on your side.
      Color histograms are quite fast and can do impressive stuff in a controlled lighting environment. Chasing the roomba is a great project, and it's made easier considering you work x-y and leave z as a constant. Then you can add the height control by getting the controller to keep the roomba size (blob, houghcircle or else) at a specific size.
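
      As a rough illustration of that idea, a proportional height controller can keep the apparent radius constant. The gain, target radius and climb-rate limit below are assumed example values, not tested numbers:

          # Keep the roomba's apparent size constant by commanding a climb rate.
          TARGET_RADIUS_PX = 40.0   # desired blob/circle radius in pixels (assumed)
          KP = 0.02                 # gain, m/s per pixel of error (assumed)
          MAX_CLIMB = 0.5           # clamp commanded climb rate to +/- 0.5 m/s

          def height_correction(measured_radius_px):
              # radius too big -> too close -> climb (positive = up)
              error = measured_radius_px - TARGET_RADIUS_PX
              climb = KP * error
              return max(-MAX_CLIMB, min(MAX_CLIMB, climb))

      The sign convention and the gain would of course depend on the camera geometry and the vehicle.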

      Concerning the tee, you can use v4l2loopback, which works with gstreamer (release 0.10); you can take a look at my balloon finder blog in the companion computer forum.
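
      For what it's worth, a minimal sketch of that tee idea - duplicating the camera through a v4l2loopback device so that both the video streamer and OpenCV can read it - could look like this; the device names and the gstreamer pipeline in the comment are assumptions:

          # The camera is assumed to be duplicated into a v4l2loopback device
          # first, e.g. with something along these lines (illustrative only):
          #   sudo modprobe v4l2loopback
          #   gst-launch-0.10 v4l2src device=/dev/video0 ! tee name=t \
          #       t. ! queue ! v4l2sink device=/dev/video1 \
          #       t. ! queue ! ...encode and UDP-sink to the ground station...
          #
          # OpenCV then reads the loopback device like any other camera:
          import cv2

          cap = cv2.VideoCapture(1)   # index 1 = /dev/video1 (the loopback)
          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              # ... run the balloon / roomba detection on 'frame' here ...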

      Best regards
    • Developer

      Looking good.  The 7.6FPS is better than I'm getting with the balloon-popper on the RPI2.

      I had very bad luck with the cvHoughCircles though when I used it originally. It seemed to be easily thrown off by the sun hitting the balloon (which makes part of the balloon appear white instead of red).

    • Yep,

      Some additional info:

      There is no major gain with the RP3 because:

      - The OS is the same, so it's not using the 64-bit cores

      - The overclocking option is disabled... We can overclock the RPI2 without problem

      - The GPU driver is the same, so no real optimisation

      - The CPU is running at 27%, so one core is fully loaded; I might get an additional fps gain by multithreading (next lab)

      Did some tests with UDP and it is quite interesting. I really think that we should split the image process from the controller. This way, we can substitute the devices and/or the programs; as an example, it is possible to interface an RPi Zero with a Pixy Cam and get pretty good results!!  I really need your feedback on this.

    • Developer

      Ok, so the red-balloon-popper should be using the python "processing" features which should end up with it running on multiple cores.  That was working on Odroid and I had assumed it was working on the RPI2 but I haven't specifically looked at "top" to make sure all cores were being used.

      In the red-balloon-popper code the image processing is somewhat split from the controls.  So the find_balloon.py script's analyse_frame() function does most of the OpenCV work while balloon_strategy.py is the "main" program and also implements the controls (i.e. search_for_balloon and move_to_balloon functions).  It could be divided up more though for sure and perhaps you're referring to separating them into separate processes?
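
      Just to illustrate that last point, a separate-process split could look roughly like the sketch below. This is not the actual ardupilot-balloon-finder structure, and detect_balloon() here is only a hypothetical stand-in for the analyse_frame()-style OpenCV work:

          from multiprocessing import Process, Queue

          def detect_balloon():
              # hypothetical stand-in for the real OpenCV detection step
              return (False, 0.0, 0.0, 0.0)

          def image_worker(results):
              # image-processing process: grab frames, detect, publish results
              while True:
                  found, x, y, radius = detect_balloon()
                  results.put((found, x, y, radius))

          def control_loop(results):
              # control process: consume detections, run the strategy/controls
              while True:
                  found, x, y, radius = results.get()
                  if found:
                      pass   # move_to_balloon()-style control would go here

          if __name__ == '__main__':
              q = Queue(maxsize=1)
              worker = Process(target=image_worker, args=(q,))
              worker.daemon = True
              worker.start()
              control_loop(q)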

      The PixyCam is really good and super fast.  The IRLock sensor is really a PixyCam so we already have a driver for it in Ardupilot.  It could certainly be used to implement a red-balloon-finder but it seems to me it's not a very general solution.  It can find a single type of object but it doesn't capture the video and modifying the algorithm would mean diving into the PixyCam code which I'm pretty sure is quite different from programming a companion computer running Ubuntu (or whatever).  Also the next big step seems to be "Deep Learning" (whatever that means) which I think is going to require a really beefy companion computer (like the nVidia).

      rmackay9/ardupilot-balloon-finder: code meant to be run on an Odroid to allow an ArduCopter Pixhawk based multicopter to find red balloons for Sparkfun's AVC 2014 competition
    • Randy,

      Bear in mind that I am NOT working in Python at this stage; it is in C++.  The Python multithreading seems to work with the RPI, but find_balloon.py is not fast enough... as you know. So, I am trying to speed it up by implementing find_balloon in C++, but to do so, I am looking for a simple and elegant way to get the two environments to work together on multiple platforms, and the only way I found logical is by using UDP. Otherwise you have to work with shared memory, shared pipes or shared files, and if anyone can show me the benefit over UDP, please be my guest.

      This way we can "publish" the following, either on a specific private UDP port, as a -pose topic under ROS (like the Bebop drone), or as a MavLink navigation command (or else):

      balloon_found
      balloon_x
      balloon_y
      balloon_radius

      So, the video process would integrate colour_finder.py + find_balloon.py + balloon_video.py + balloon_utils.py and the associated configuration file (width, height, lens (FOV), color filter, balloon diameter, etc.). Technically I would generate a vector corresponding to a lens-compensated projection, with an estimated distance based on the known balloon diameter.
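
      To make that concrete, a minimal sketch of publishing such a detection vector over UDP could look like this; the port and packet layout are only assumptions for illustration:

          import socket
          import struct

          # assumed port for the image-process -> controller link
          UDP_ADDR = ('127.0.0.1', 14560)
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

          def publish_detection(balloon_found, balloon_x, balloon_y, balloon_radius):
              # one byte flag + three floats, little-endian (assumed layout)
              packet = struct.pack('<Bfff', 1 if balloon_found else 0,
                                   balloon_x, balloon_y, balloon_radius)
              sock.sendto(packet, UDP_ADDR)

          # the controller side simply does struct.unpack('<Bfff', data)
          # on whatever it receives, whether it is written in Python or C++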

      There is a requirement to add a Kalman filter so that the vector and the distance are stable even if the detection algorithm randomly jumps around the object.
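
      As a placeholder for that, a scalar constant-value Kalman filter (one per component of the vector) is about the simplest thing that could work; the noise values below are pure guesses:

          class ScalarKalman(object):
              """Smooths one component (x, y or distance) of the detection."""
              def __init__(self, q=0.01, r=4.0):
                  self.q = q      # process noise: how fast the true value moves (guess)
                  self.r = r      # measurement noise: how jumpy the detector is (guess)
                  self.x = None   # state estimate
                  self.p = 1.0    # estimate variance

              def update(self, z):
                  if self.x is None:
                      self.x = z          # initialise on the first measurement
                      return self.x
                  self.p += self.q                  # predict
                  k = self.p / (self.p + self.r)    # Kalman gain
                  self.x += k * (z - self.x)        # correct
                  self.p *= (1.0 - k)
                  return self.x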

      Yes, I agree with the ''Deep Learning'' stuff; the DARPA truck in the Nvidia video is quite impressive. At the rate I am ordering computers these days, I could certainly get a brand new TX1 within a month :-) ... I am waiting for release 2 of Jurgen's baseboard... anyway, the v4l2 driver is not released yet... here's Dusty's message on the developers forum:

      Sorry, the update was pushed out to include critical kernel, DVFS and perf fixes. Current ETA is late Feb/March.

      ...so for the moment we would end up with a "blind" 20-watt number cruncher!!!
