Amazing project from Hackster.io:
ABOUT THIS PROJECT
Voice Controlled Quadcopter Drone
With Amazon Echo, AWS IoT, and 3DR IRIS+
Materials (Shopping List)
- Raspberry Pi with Raspbian, power supply, Wi-Fi adapter, case, etc.
- 3DR IRIS+ (915 MHz radio version for the U.S.). Any ArduCopter- or Pixhawk-based copter should also work.
- Amazon Echo
Setting up AWS IoT
- Log in to AWS Console: https://aws.amazon.com/
- Navigate to AWS IoT
- Create a “Thing”. First, from the AWS IoT page, create a resource.
- Now, “Create a Thing”. For this demo I am calling mine “PiGroundStation01”.
- View your thing. Note the REST API Endpoint on the right panel; it looks like “https://some-random-string.iot.us-east-1.amazonaws.com/things/DroneControl/shadow” and you will need it later. Also note the MQTT topic, which you will need later too, when we edit the Python code for the ground station.
- Next, we need to create a certificate so that your device can authenticate to AWS IoT. Click “Create a Certificate” and use “1-Click Create”. Download the public key, private key, and certificate; you will need to put these on the Raspberry Pi later. Keep them somewhere safe, because Amazon will not issue them again. If you lose them, you would have to start the process over and create a new certificate.
- Then select the certificate with the checkbox and, under the Actions menu, activate it.
- Now, with that certificate still selected, copy the name of the “thing” you just created; you will need it in the next step. Go back to the Actions menu and select “Attach a thing”.
- On the popup form, paste or type the name of your thing and click Attach.
- You can now see that the certificate detail shows your thing to be attached.
- Best practice: create two sets of certificates, one for the Lambda function and another for the Python ground control. This is better security practice, and it also helps troubleshooting, because the AWS IoT logs will show which connection is doing what. Don’t forget to attach both certificates to your Thing, using the same process described above.
- Now, we have to create a Policy so that clients using the certificate will be allowed to connect to the endpoint. To keep this easy, we’re going to create a very wide-open (wildcard-permissive) policy. In a production environment you will want to create a policy that is narrowly tailored to achieve just what you need and nothing more. Click “Create a resource”:
- Create a Policy and fill it out (you can name it whatever you want). Click “Add Statement” when you are done.
- You should now see Statement 1. Scroll down a bit and click the big “Create” button.
- Look up on the top right panel, and you should be able to see the details of the policy you just created.
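For reference, a wildcard policy like the one described here looks roughly like the JSON below. It is sketched via Python so you can tweak and print it; the exact statement AWS generates for you may differ slightly.

```python
import json

# Illustrative wide-open AWS IoT policy document, matching the
# wildcard-permissive policy described above. In production, replace
# "iot:*" and "*" with narrowly scoped actions and resource ARNs.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iot:*",
            "Resource": "*",
        }
    ],
}

print(json.dumps(policy, indent=2))
```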
Your thing is ready for use, let’s move on to other parts of the setup.
Set up Raspberry Pi Custom Ground Station
- This tutorial assumes you already have a functional Raspberry Pi loaded with the latest Raspbian distribution. There are many tutorials online that show how to do this.
- Assemble the Raspberry Pi and attach the USB Wi-Fi dongle.
- Connect the USB radio antenna from the IRIS+ to a free USB port on the Raspberry Pi. You should not need to install any special software.
- Connect (with keyboard and monitor) to Raspberry Pi and enable SSH.
- On the Raspberry Pi, run the following commands:
cd ~
sudo apt-get install avahi-daemon
sudo chown pi /usr/local/ -R
sudo apt-get install nodejs npm -y
sudo apt-get install zip
sudo apt-get install screen
pip install awscli
pip install droneapi
pip install paho-mqtt
git clone https://github.com/veggiebenz/echodronecontrol.git dronecontrol
cd dronecontrol
mkdir certs
If you have any trouble with any of those commands (as in, there are errors in the command output), try the following and then pick up where you left off in the first code block above. Retry any command that did not complete successfully.
#run these only if there were issues with the previous set of commands
sudo apt-get remove python-pip
easy_install pip
hash -r
To validate that your Python environment is now set up correctly, open the Python interpreter and run “import droneapi” and then “import paho.mqtt”. If neither errors out, you are OK. Enter “exit()” on a new line to leave the interpreter.
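If you prefer a one-shot check over typing into the interpreter, a small script like this (not part of the project, just a convenience) performs the same validation:

```python
# Validate the Python environment by importing the two modules the
# ground station needs; prints OK or FAILED for each.
for module in ("droneapi", "paho.mqtt"):
    try:
        __import__(module)
        print(module, "OK")
    except ImportError as exc:
        print(module, "FAILED:", exc)
```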
- Using an SFTP client (I am using Cyberduck for OSX), connect to Raspberry Pi. Click “Open connection” on top left, and fill out the information for your Raspberry Pi, and click Connect.
- In the SFTP client, navigate to /home/pi/dronecontrol/certs/ and upload the 3 certificate files from AWS IoT to the Raspberry Pi. Then move them into dronecontrol/groundstation/awsCerts/. Take note of the file names, because yours will likely differ, and you’ll need to put them into some of the program files later in the process.
Editing files on Raspberry Pi
There are quite a few options for editing files on Raspberry Pi. There are also people with very strong opinions on which is the best. Keeping it simple, you can use “nano” to do simple editing in place on the Pi. If you are going to do serious python coding on Raspberry Pi, you could go for a heavyweight but very functional solution like PyDev with Eclipse. PyDev lets you remotely mount the Pi filesystem so you can edit files in place and test very easily. Or you could mount the Pi drive remotely and use something like Atom from your workstation.
Setting up your Lambda function
- From the AWS Console, select “Lambda”
- Click on “Create a Lambda function”
- Go down to the bottom of the page, and “Skip” selecting a blueprint.
- Name your Lambda function “EchoDroneControl”. This is important because the name has to match a directory name on the RPi for the upload script to work correctly. Leave the runtime as Node.js, and select “Edit Code Inline” - we will actually upload our code using a script later. In the code block, just put a comment like:
// nothing here yet
- Use Basic Execution Role. A new window will pop up suggesting you create a new IAM role; it may suggest a name like lambda_basic_execution, which is fine. The name doesn’t matter as long as you remember it, because you will need the role’s ARN later. You will have to enable popups (disable your popup blocker) to get to the Lambda Role page.
- Click “Allow” on bottom of Lambda Role page.
- Back on the Create Function page, leave the other defaults except Timeout, which you should increase to 10 seconds. Then click “Next”.
- “Create Function”.
- Once your function page comes up, note your Lambda function’s ARN (Amazon Resource Name) on the top right. It will look like arn:aws:lambda:us-east-… Copy this, because you will need to supply it to the Alexa Skill configuration in a later step.
- Select the “Event Sources” tab and “Add event source”. Select Alexa Skills Kit, and save.
- Go to your IAM (Identity & Access Management) Roles page: https://console.aws.amazon.com/iam/home?region=us-east-1#roles
- Click on the role you created a few steps ago - it’s probably lambda_basic_execution unless you entered a different value.
- At the top right of the screen, copy the Role ARN value.
- On Raspberry Pi, navigate to ~/dronecontrol/EchoDroneControl/ -- these are the files you will need for Lambda, and for Alexa Skills Kit (ASK).
- Edit “upload.sh” and paste the Role ARN value into the variable on line 3. Without this you will not be able to upload the source code to Lambda.
- Edit the config.js file: set your Host, Topic, and Alexa Skill App ID (you will get the App ID in the next section).
- Find the set of certificates that you created for Lambda and open the certificate files with a text editor. You will need to place their contents into strings inside config.js; looking at the file, this should be self-explanatory. We have to do this because Lambda doesn’t like reading files from the file system, so we pass the certificates as Buffers into the IoT device initialization routine.
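If pasting multi-line PEM contents into JavaScript string literals gets fiddly, a small helper like this (hypothetical, not part of the project) can flatten a certificate file into a single escaped line you can drop into config.js:

```python
import os
import tempfile


def pem_to_js_string(path):
    """Read a PEM file and escape its newlines so the whole certificate
    fits into one JavaScript string literal inside config.js."""
    with open(path) as f:
        return f.read().replace("\n", "\\n")


# Demo with a throwaway file standing in for a real certificate:
fake = tempfile.NamedTemporaryFile("w", suffix=".pem", delete=False)
fake.write("-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n")
fake.close()
print(pem_to_js_string(fake.name))
os.unlink(fake.name)
```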
Setting up Alexa Voice Skill
- Go to https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit to set up an account and create your skill.
- Click “Apps & Services”.
- Next, select “Alexa”.
- Click “Get Started >” under Alexa Skills Kit
- Use name = “Drone Control”, invocation name = “Drone” and for Endpoint, select Lambda ARN and use the Lambda Function ARN value you got in the prior step (not the ROLE ARN).
- Save these values, and on the next page you will see entry boxes for the Intent Schema and Sample Utterances. We will get these from our Git project.
- On RPi, navigate to ~/dronecontrol/EchoDroneControl/speechAssets/ -- these are the files you will need. Open up IntentSchema.json and paste the values into the Intent Schema box on the Amazon web page. Likewise for the SampleUtterances.txt, put the content in the corresponding text box on the web site, and select “Next”.
- You won’t be able to test this yet, so scoot down to the bottom of the page and click “Next”.
- On the next page there are some mandatory fields, so let’s fill them out.
- Example Phrases: “Alexa talk to Drone”, “Command Launch”, “Go forward 10 feet”
- It’s not necessary to submit for certification as right now it’s only you who will be using it. Be sure to save your entries though.
- Go back to the Skill Information page for your Alexa skill, and you should now see an Application ID. This is the value you need to place in your Lambda config.js as app_id. The point of doing this is that someone else’s Alexa skill can’t execute your Lambda function: the JS code stops executing if the request isn’t coming from your Alexa skill.
- Let’s go back to the Raspberry Pi and finish setting up lambda. Then we can come back to Alexa Skills Kit site and TEST our Alexa to Lambda integration.
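The app_id gatekeeping described above can be sketched like this (in Python for brevity; the real check lives in the Node.js Lambda, and the ID shown is a placeholder):

```python
# Placeholder Application ID; use the value from your Skill Information page.
APP_ID = "amzn1.echo-sdk-ams.app.0000-placeholder"


def validate_app_id(event):
    """Reject requests that do not come from our own Alexa skill."""
    incoming = event["session"]["application"]["applicationId"]
    if incoming != APP_ID:
        raise ValueError("Invalid Application ID: " + incoming)


# A request carrying our own Application ID passes:
event = {"session": {"application": {"applicationId": APP_ID}}}
validate_app_id(event)
print("request accepted")
```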
Uploading and testing your Lambda function
- Before you can upload your function, you’ll have to do a couple of things. First, install the AWS Command Line tools.
- Then run “aws configure” and set up your access parameters.
- Now that your config.js file is updated with the correct values, you can execute upload.sh to push the file to AWS.
cd ~/dronecontrol/EchoDroneControl
./upload.sh
- Go back to your Alexa Skill, and click “Test” in the left nav.
- Go down to “Enter Utterance” and type “Tell drone Command launch” and view the response from Lambda. You should see “Executing command launch” somewhere in the Lambda response. This means Alexa skill is talking successfully to Lambda.
- You should also be able to go into AWS IoT Logs and see the MQTT Connection and Publish successful messages.
- Now let’s get the groundstation working.
Customizing the code for your Python Ground Station on Raspberry Pi
- On the RPi, navigate to ~/dronecontrol/groundstation/
- Edit echodronectl.py using your favorite python editor.
- You need to change the MQTT endpoint name and the topic name.
- Also make sure the certificate names in the script match the names of your certificates in ~/dronecontrol/certs/.
- In short, edit the variables at the top of the Python file: cert_path, host, topic, and the certificate file names.
- Run the following commands from the Raspberry Pi console. This puts two commands into an initialization script that runs when MavProxy starts.
echo "module load droneapi.module.api" >> ~/.mavinit.scr
echo "api start ~/dronecontrol/groundstation/echodronectl.py" >> ~/.mavinit.scr
If the script complains about a file not found, you might need to edit ~/.mavinit.scr and use absolute paths instead of relative paths.
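As a quick sanity check before launching MavProxy, you can confirm that the certificate file names you entered actually exist in cert_path. The path and file names below are just examples; substitute your own.

```python
import os

cert_path = "/home/pi/dronecontrol/certs/"  # path used earlier in this guide
# Example names; yours will be the files you downloaded from AWS IoT.
cert_files = ["root-CA.crt", "certificate.pem.crt", "private.pem.key"]

for name in cert_files:
    status = "OK" if os.path.isfile(os.path.join(cert_path, name)) else "MISSING"
    print(name, status)
```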
Up, up, and away! (Well, hopefully not "away"...)
Tip: If you are away from home, you can set up your mobile phone’s Hotspot with the same SSID and password as your home wifi network and your devices will not notice the difference.
- You need to be familiar with controlling your aircraft using the manual controls before you try some automated method like this. Also you should be very familiar with how to execute “RTL” sequence and know what to expect. Be very familiar with taking manual control of the craft when things go wrong.
- Drones are inherently dangerous and could cause great harm, so do not operate near anybody or anything.
- Place your IRIS+ on the ground.
- Power up the drone, also power up the RC controller.
- Make sure Alexa is powered up and connected to your wifi network.
- Make sure the Pi ground station is powered up and connected to your wifi network, with the 3DR 915 MHz radio plugged into a USB port.
- The Pi ground station must be running the “mavproxy” program - you can set this to auto-start, or you can just ssh into the Pi and run it from there. For auto-start, put the following into ~/mavproxy.sh and add ~/mavproxy.sh to /etc/rc.local:
#!/bin/bash
# start a screen session for mavproxy but detach from it
screen -dm -S mavproxy mavproxy.py
#read -p "mavproxy launched in background. Use screen -r mavproxy to attach" -n1 -s -t3
- You will have to ARM the drone manually. This could be automated but has been left in place as a manual step purposely.
- Say “Alexa, talk to DRONE”. You should hear “Welcome to Drone Control”.
- Initiate your flight by speaking “COMMAND LAUNCH”
- Once the current waypoint is reached (in the case of launch it is by default about 5 meters above ground), you can issue a new command. Try commands like “GO FORWARD 10 FEET (or METERS)” or “TURN RIGHT”. You can say other things like “GO RIGHT 15 METERS”, and you can even say “TURN LEFT 45 DEGREES”.
- Once you are done with your flight, you can say either “COMMAND LAND” to land wherever the craft currently is, or you can say “COMMAND R T L” or “COMMAND RETURN TO LAUNCH”.
- When landed, use the disarm sequence on the RC controller to be sure the craft is stopped, and disable the power to the drone. Turn off your RC controller.
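Under the hood, a command like “GO FORWARD 10 FEET” has to become a distance in meters and a new GPS target. Here is a rough sketch of that conversion, using simplified small-distance math rather than the project’s actual code:

```python
import math

FEET_TO_METERS = 0.3048


def to_meters(value, unit):
    """Convert a spoken distance to meters; FEET and METERS supported."""
    return value * FEET_TO_METERS if unit.upper() == "FEET" else float(value)


def offset_location(lat, lon, heading_deg, distance_m):
    """Approximate new lat/lon after flying distance_m along heading_deg.
    Uses a flat-earth approximation, fine for hops of a few meters."""
    earth_radius = 6378137.0  # meters
    d_north = distance_m * math.cos(math.radians(heading_deg))
    d_east = distance_m * math.sin(math.radians(heading_deg))
    new_lat = lat + math.degrees(d_north / earth_radius)
    new_lon = lon + math.degrees(d_east / (earth_radius * math.cos(math.radians(lat))))
    return new_lat, new_lon


# "GO FORWARD 10 FEET" with the nose pointing due north (heading 0):
dist = to_meters(10, "FEET")
print(round(dist, 3))  # 3.048
lat, lon = offset_location(37.0, -122.0, 0, dist)
print(lat > 37.0)  # drone target moved north -> True
```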
COMPONENTS AND SUPPLIES
Raspberry Pi 2 Model B × 1
3DR IRIS+ × 1
Alexa Amazon Echo × 1