To create a valid SBC Linux image, we need to know what software would be useful for a newcomer to have pre-installed.
Starting with Ubuntu 14.04 LTS for ODROID/RPi2.
Additional software would be:
System Software
- FTDI USB support (drivers are built in)
- Python
- WiFi Support (As Client and Access Point)
- DHCP and DNS Support
- LTE Module Support (needs list of compatible devices)
- OpenCV + Python Bindings
- OpenVPN support for secure connections
- gstreamer-1.0 (see the example pipeline after this list)
Drone Software
- MAVProxy
- DroneAPI
Python Applications
- Randy's Balloon Finder
- Drone API examples
Boot-up should start MAVProxy and other Python software in known configurations (see the sketch below).
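For the boot-up item, something along these lines in /etc/rc.local could do it. This is only a sketch: the serial device, baud rate and ground station address are placeholders, not part of any agreed image.

# Start MAVProxy at boot and forward MAVLink to the ground station
# (/dev/ttyUSB0 and udp:192.168.1.10:14550 are placeholders)
mavproxy.py --master=/dev/ttyUSB0 --baudrate 57600 \
    --out udp:192.168.1.10:14550 --daemon &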
[LATER] Also
- Inadyn
- dnsutils
- usb-modeswitch (for 3G/LTE dongles)
- proftpd
- VLC (some people prefer VLC over GStreamer)
- uqmi? (true LTE support; widely used by OpenWrt)
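For the gstreamer-1.0 item above, a typical low-latency RPi pipeline looks something like this. It is only a sketch; the ground station IP, port and bitrate are placeholders.

# On the vehicle: stream the RPi camera as RTP/H.264 (IP and port are placeholders)
raspivid -t 0 -w 1280 -h 720 -fps 30 -b 2000000 -o - | \
gst-launch-1.0 fdsrc ! h264parse ! rtph264pay config-interval=1 ! \
    udpsink host=192.168.1.10 port=5600

# On the ground station: receive and display
gst-launch-1.0 udpsrc port=5600 caps="application/x-rtp,media=video,encoding-name=H264,payload=96" ! \
    rtph264depay ! avdec_h264 ! autovideosink sync=false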
Replies
Does "sudo apt-get install python-opencv" work on RPI2? Darn, I'd like that 5hrs of my life back. It makes sense that it would work, I couldn't believe it was as difficult as it seemed (based on this blog post) considering how widely used RPI2 and OpenCV are.
You know.. I may recreate the image just using the simple install command. txs for this.
Come on Randy ... Don't tell me you spent 5 hours looking at the compile :-))
Pyimagesearch is great stuff! It includes all the optional packages and is quite extensive for development and OpenCV training, but it is not necessary for this project.
If you recreate the image, please use the latest Raspbian (Feb 9), because it already has the Mesa drivers installed.
Please add the following:
- all the required paths in the .bashrc (in /home/pi)
- chmod +x on all the *.py scripts
- sudo apt-get install mplayer2, to play back the video sequences
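As shell commands those steps would look roughly like this (a sketch; the balloon_finder script path is an assumption, adjust to wherever the scripts actually live):

# Add the project scripts to pi's PATH (the path shown is a placeholder)
echo 'export PATH=$PATH:/home/pi/ardupilot-balloon-finder/scripts' >> /home/pi/.bashrc
# Make all the Python scripts executable
chmod +x /home/pi/ardupilot-balloon-finder/scripts/*.py
# Install mplayer2 for playing back the recorded video sequences
sudo apt-get install mplayer2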
I will submit the additional changes on https://github.com/diydrones/companion/blob/master/RPI2/Raspbian/se...
QGC builds and mostly runs on RPi2, once you manage to get Qt5 built. I'm working on a build script, but the basics are at
https://github.com/mavlink/qgroundcontrol/pull/2554
strongSwan 5.3.5 with the vici Python bindings enabled.
Useful for NAT traversal, encryption and maintaining links over roaming interfaces (e.g. 3G) via MOBIKE.
It will only add to the latency issues for video, but it is useful for command and control.
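For reference, the relevant part of the strongSwan setup is a single option on an IKEv2 connection. A minimal sketch (connection name and peer address are placeholders, not a tested config):

# Append a MOBIKE-enabled IKEv2 connection to /etc/ipsec.conf
sudo tee -a /etc/ipsec.conf <<'EOF'
# mobike=yes keeps the tunnel alive when the 3G interface's address changes
conn companion
    keyexchange=ikev2
    mobike=yes
    left=%defaultroute
    # placeholder ground station address
    right=gcs.example.org
    auto=start
EOF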
I've made a new version of the RPI2/Raspbian image and also made some small updates to the setup scripts.
The improvements are:
In the companion repo I've also added a new Client/Ubuntu directory with a single script that allows watching the video.
I tested this first attempt's video latency and it was terrible (0.37 seconds), so there's certainly some work still to do.
Next I plan to add dronekit and opencv. We will need a way to make the video available both to the pilot on the ground and to dronekit/opencv on the vehicle. There are some web pages out there on how to do this, but if anyone has any advice on this..
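One common approach (a sketch only, not something I've tested on this image) is to split the stream with gstreamer's tee element: one branch goes to the ground over RTP, the other stays on the vehicle for OpenCV, e.g. via a shared-memory sink:

# Split the camera stream: RTP to the ground, decoded frames to a local shm socket
# (the host IP and socket path are placeholders)
raspivid -t 0 -w 640 -h 480 -fps 30 -o - | \
gst-launch-1.0 fdsrc ! h264parse ! tee name=t \
    t. ! queue ! rtph264pay config-interval=1 ! udpsink host=192.168.1.10 port=5600 \
    t. ! queue ! avdec_h264 ! videoconvert ! shmsink socket-path=/tmp/cam sync=false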
Hello Randy,
I will give it a try.
Concerning the Access Point wiki, I would replace this:
This has only been tested using the WiFi dongle sold by Adafruit. Check this with Craig
With this:
The chipset of the WiFi dongle has to be AP capable: https://wireless.wiki.kernel.org/en/users/drivers
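A quick way to check a given dongle (assuming the iw package is installed) is to list the modes the driver reports:

# Look for "AP" under the supported interface modes
iw list | grep -A 10 "Supported interface modes"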
On second thought, the RPi wireless gateway that you describe in the wiki might be useless with the balloon_finder companion, because we need to stream the video over WiFi on 5GHz directly to the ground control station. It might be important at this stage that we agree on a common architecture for the balloon-finder companion. Would you agree that I submit a system diagram to make sure that we are in tune?
@Patrick
For the OBC we also once had to work out the best location for the AP: either on the ground, or in the air on the drone. In the air means that the drone can directly connect to different users that are far apart on the ground. It also means that users can be swapped around on the ground without too many problems, and it allows different systems to be used easily, even at the same time. This is probably the best setup for LOS ops.
The downside is that ground users who want to connect to each other (for whatever reason) reduce the capacity of the UAV to get data to the ground. In the end we decided to have two WiFi networks: one in the air that brought data down to a (companion) computer on the ground, from where the ground WiFi network then distributed it to the individual ground users. You can of course use separate channels or 2.4/5.8GHz networks to get the required performance, or even connect directly to the UAV AP should there be only one user or a failure of the ground WiFi companion.
I suppose my point is that, depending on the application, it would be helpful if the client/AP settings were simple to change and configure, so that the various options can be accommodated. To get the most out of the companion image I still think a CLI/web interface will be required, and including the WiFi setup in that (I use wicd-curses because I'm lazy) would probably be a good idea.
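To show how small the AP side of that switch is, here is a minimal hostapd sketch (SSID, passphrase and channel are placeholders; hostapd and a DHCP server are assumed to be installed):

# Write a minimal AP config and restart hostapd
sudo tee /etc/hostapd/hostapd.conf <<'EOF'
interface=wlan0
driver=nl80211
# SSID and passphrase below are placeholders
ssid=companion
hw_mode=g
channel=6
wpa=2
wpa_passphrase=changeme123
wpa_key_mgmt=WPA-PSK
EOF
sudo service hostapd restart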
Ideally something like Tower or MP could even have a sub-menu for it eventually.
Regards
@JB
Depends on the system topology. If we want to go bare-bones, the AP solution is recommended, because you can connect any PC or tablet just as you would with any access point. The other important feature to consider is latency: getting the image captured, processed, transmitted/received and displayed with the least possible delay is easier to control if the OBC has full control over the capture-process-transmit part of the system.
The RPi native images offer most of the GUI you require to make things simpler. But my guess is that at the development stage it might be more efficient to keep it simple (KIS) and add the GUI and extra features at a later stage.
Hi Patrick
Sorry, I should be more careful about which abbreviations I use on the forum. I actually meant OBC as in Outback Challenge, not onboard computer.
There is no question that the SBC (single board computer) needs to be on the aircraft for imaging and processing purposes. For a single ground user, which is likely to be 90% of the typical cases, this arrangement works well with the AP in the air as well.
My comments were intended to demonstrate that there are some use cases where the SBC image being created here might also want to be run on the ground, to share MAVLink and imaging with various ground users without affecting the downlink transfer from the UAV. It could also be used to control an antenna tracker, or as a GCS (using FPV goggles/monitor), etc.
I agree that the GUI component can be done later. I mentioned it again because I like designing from the user backwards: 1. the user's overall goal, 2. user control, 3. the actual mechanics. My point being that you could have all the functionality in the world locked away in the command line, but the most likely way these features will be used is if the user can easily access them and decide which one they need for a task, without needing to type help or get the manual out. It also assists in situational awareness and confidence in the use of the system. I suppose it's a bit like having a hammer but not being able to find the handle. For some! ;-)
On the subject of latency I think this depends on the use case as well. From what I can discern there are four main use cases for this SBC image (listed in order of latency):
Use cases 1, 2 and 3 are "video" based systems and therefore require a low-latency setup; the 4th, for which Tridge's image recognition etc. is useful, is not so time-critical. The SBC will simply sit there and crunch the images coming in, and a delay on the output side, sometimes seconds and sometimes minutes, isn't really a problem for that type of mission.
So maybe, to facilitate the different use cases, we could have different SBC modes for each (a sketch of what that could look like is below). I'm sure there's some overlap between them, but we need to start somewhere, and the services running in the background, like WiFi, MAVLink etc., should be able to support that.
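Purely as an illustration, and entirely hypothetical (the config file, mode names and scripts below don't exist anywhere yet), the modes could be one setting that a boot script dispatches on:

# Hypothetical boot-time mode dispatch; every name here is a placeholder
. /boot/companion.conf              # contains e.g. MODE=fpv
case "$MODE" in
    fpv)          ~/scripts/start_video.sh --low-latency ;;
    relay)        ~/scripts/start_wifi_relay.sh ;;
    recognition)  ~/scripts/start_image_processing.sh ;;
esac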
Anyways that's my two cents.
Regards