Let’s say, hypothetically, that your VP of Drone Fleet Operations just asked you to help her handle drone management, route planning, payload optimization and more. What do you do? Well, there are a few approaches to tackling the problem. Approach #1 is all about controlling drones using the Salesforce1 Mobile app. That’s what I’m going to talk about today. Note that all of this is done with a free Developer Edition and a little code.
Although I won’t cover it here, there’s also a mildly entertaining yet entirely impractical YouTube artifact documenting my adventures at ThingMonk where together with the excellent Darach Ennis we were able to launch a quadcopter using a coffeepot.
Equipment & Architecture
Let’s start by looking at the equipment you’ll need. The first thing is a quadcopter or two. I used the Parrot AR Drone 2.0, available at pretty much every retailer worth their salt. The Parrot is great for a lot of reasons, but first and foremost is that it has a great API. Where you have an API, great things are possible, right? Now, the Parrot is also a toy, so you production-minded folks will probably want to upgrade to something more robust.
The way the AR Drone works out of the box is that it creates a WiFi hotspot. You then connect your controlling device to that AR Drone hotspot and operate it. Parrot sets you up with an app that runs on either an iOS or Android device. I’ve controlled them from both platforms and it works great. The default AR Drone configuration requires one controller per drone, and it requires that controller to be on the WiFi network provided by the drone. If you have two drones, they are isolated from each other by their network connections and there’s no interaction.
In order for this to work with Salesforce, and in order to control multiple drones at the same time, we have to somehow unify these devices, which in the out-of-the-box configuration means the controller needs to bridge multiple networks. My go-to local interface box is typically the Raspberry Pi, and, fortunately, the Raspberry Pi can support multiple network interfaces, which means it can also handle multiple network addresses. There are a few ways you could configure this, but I chose to use a single Raspberry Pi as a bridge between Salesforce and two other Raspberry Pis, which connect to the AR Drones. It looks a little like this:
Control via the Streaming API Pattern
My first approach on this project is to use the familiar Streaming API Pattern. (See Controlling Philips Hue from Salesforce1 for another example, or get started with a really simple streaming API example.) The Drone Gateway connects to Salesforce, listens to a Streaming API Push Topic and then forwards those instructions to a device as soon as they’re received.
On the Salesforce side of the house, we have to create a simple way to generate listenable data. This is easier than it sounds. The first thing we want is an sObject to store the data in. I’m re-using an existing object pattern from other message-driven work; I call it “Drone Message.” The two key pieces of data it stores are the Address and the Message. You can see in the screen capture that this one is sending the “land” message to address “D2”.
You can of course use the default UI to create the records, but that requires you to know addresses and message codes. Since code handles those kinds of details better than my brain does, I created a simple Apex class to create these records.
And now all I need is a little Visualforce code to extend this out to the UI layer. Note that this Visualforce page is tied to the Apex code above using the controller attribute.
Now you need to make this Visualforce page available for mobile apps, create a tab for it and finally customize your mobile navigation options. These are all just a few clicks, so check out the links if you’ve never done it before — pretty easy. Out in the world of humans, this renders as a very simple page, the same one that you saw in the video clip above.
Now that we have a quick and easy way to create listenable messages, let’s take a quick look at the Drone Gateway that’s doing the listening. This is a pattern I’ve re-used a few times, so you might be familiar with it. The gateway authenticates, begins listening to a Streaming API Push Topic, and then handles whatever it receives. I chose to write this in Node.js and the code is pretty simple. The connection to Salesforce is detailed in the Philips Hue article, so I’ll just show you how it handles the message. Note the “address” and “message” arguments.
Now, you will no doubt have noticed that the above code does nothing more than make a call to a web server. When I was testing, I decided that an HTTP-based interface would also be fun, so I created a small server that simply responds to two URLs: start and stop. You can see that these map to the CylonJS commands for “takeoff” and “land”.
And there you have it. The start to finish message flow now looks like this:
- User presses takeoff on their mobile device.
- Salesforce1 inserts a Drone Message record for takeoff.
- The Streaming API picks up the new record and forwards it to listeners.
- The Node.js based Drone Gateway catches the new record and sends it to the right address.
- The Node.js based Drone Server sends the specific command to the AR Drone.
Code notes and links:
- The Visualforce and Apex are shown above; everything else is minor configuration.
- Drone Gateway code
- Drone Server code
My command center for the video shoot looks a bit more complicated, but it follows the diagrams above. Note the three Raspberry Pis and two network hubs on the lower left.
As you can see from the video, it’s pretty easy to get the drones to follow some simple instructions. The primary challenge with this method is the inherent lag between when an instruction is issued and when it gets to the drone. This lag depends on a huge number of factors — Internet connection, gateway device performance, Streaming API performance, etc — but the end result is the same. A drone moving at 5-6 meters per second will be in a completely different place by the time it responds to a delayed command.
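To put that in numbers (the two-second latency figure is illustrative, not measured):

```javascript
// Rough estimate of how far the drone travels between command
// issue and command arrival: displacement = speed * latency.
function commandDrift(speedMetersPerSecond, latencySeconds) {
  return speedMetersPerSecond * latencySeconds;
}
// At 5 m/s with even 2 seconds of end-to-end lag, the drone is
// 10 meters from where it was when the command was sent.
```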
This was an interesting experiment, and it raises a lot of questions for me. First and foremost: what is the best way to spread device intelligence among the components of a system? Which is to say, what role should Salesforce play in this kind of complicated interaction? My overall feeling is that this approach, while interesting, is lower level than is truly practical today.