Hi All!

Sorry for the short message but I am a little rushed for time at the moment.

I am a student working on an autonomous boat for the 2015 International RoboBoat Competition, and we want to use a Pixhawk to control our motors, navigate to waypoints, etc. We are looking at using the ArduRover code to do this control on the Pixhawk.

We want to get a prototype in the water as soon as possible. The prototype would only be controlled through RC, so we don't have to worry about the higher-level computer vision stuff (yet). To do this, the Pixhawk has to be able to talk to our motors, which speak RS485 (we are getting an RS232-to-RS485 converter). What we were thinking of doing was running ROS on the Pixhawk to handle this communication.

Has anyone put ROS on the PixHawk before?

Has anyone used a PixHawk running ArduRover to control a boat?

Where should we look in the ArduRover code for its output to the motors?




Replies to This Discussion


You might be better off using either a Raspberry Pi or a BeagleBone Black if you plan to use ROS.


TCIII ArduRover2 Developer 


Can you say more?

We chose the Pixhawk because of the ArduPilot community around it and its built-in sensors. Do you think ROS will be difficult to implement on the Pixhawk, and if so, why?



You might pose your question here: Drones-Discuss. You will have to join the Google Group to post.

If I were doing this, I would load the ArduRover2 2.46 beta 2 code, configure the Pixhawk for skid steering, and use an Arduino Uno to convert the two motor PWM channels to whatever serial packet data stream you need to control your two ESCs.


TCIII ArduRover2 Developer
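The Uno approach suggested above amounts to reading each motor's PWM channel and re-emitting it as a serial frame. Here is a minimal sketch of the framing step only; the 0xAA start byte, the 4-byte layout, and the XOR checksum are all assumptions for illustration, not the real motor protocol — consult the ESC/motor datasheet for the actual framing.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical 4-byte command frame for an RS485 ESC: start byte,
// motor ID, throttle value, and an XOR checksum over the first
// three bytes. The real protocol will differ.
struct MotorFrame {
    uint8_t start;     // frame delimiter (0xAA assumed here)
    uint8_t motor_id;  // which motor (0 or 1 for skid steering)
    uint8_t value;     // throttle command, 0-100 percent
    uint8_t checksum;  // XOR of start, motor_id, and value
};

MotorFrame make_frame(uint8_t motor_id, uint8_t value) {
    MotorFrame f{0xAA, motor_id, value, 0};
    f.checksum = f.start ^ f.motor_id ^ f.value;
    return f;
}
```

On the Uno itself, the loop would read the pulse width on each input pin, convert it to a percentage, and write the resulting four bytes out the serial port toward the RS485 transceiver.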


Why use an Uno rather than taking the RS232 from the Pixhawk and using a converter? Wouldn't an off-the-shelf converter be more reliable than an Uno?


Sorry for my limited Raspberry Pi knowledge, but what's the easiest sister board to add to handle 8 PWM inputs and 8 PWM outputs? My understanding is that the Pi doesn't have enough hardware PWM pins.


What Pixhawk outputs do you plan to use for RS232?


TCIII ArduRover2 Developer


We were planning on using the Serial 4/5 port or the CAN ports.



Thanks for the link! It was a great read!

Are you still working on the project?

Have you created a boat simulator? I ask because we have looked into it, and it is no trivial feat.

Have you been able to get ROS working with APM:Rover? I have heard multiple suggestions but no one has said they have done it.

For the RPi: boats in the RoboBoat competition usually carry an entire motherboard on board, so we are planning to build a motherboard or take apart an old laptop. We looked into the RPi, but its clock speed and RAM are too low. The ODROID that Tridgell mentioned is only $65.


Since our motors speak RS485, I've been looking for a way to change the APM:Rover code to send the needed packets to the motors. We're keeping TCIII's suggestion of using an Uno to read the PWM output from the Pixhawk and send it to the motors as a backup.

I found in libraries/AP_Motors/AP_MotorsMatrix.cpp the following lines:

    // send output to each motor
    for( i=0; i<AP_MOTORS_MAX_NUM_MOTORS; i++ ) {
        if( motor_enabled[i] ) {
            hal.rcout->write(_motor_to_channel_map[i], motor_out[i]);
        }
    }
This looks like the place where the final PWM output (motor_out[i]) is sent to the motors. I was thinking of taking the PWM pulse width in microseconds and transforming it into a value to send to the motors over RS485. The motors accept a percentage (between 0 and 100), so I was planning to map the minimum and maximum output PWM to 0 and 100 respectively.
Am I right that this block is the last place motor_out[i] is modified, and that motor_out[i] is the PWM pulse width in microseconds?

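The min/max-to-percentage mapping described above can be sketched as a small helper. The 1000–2000 µs endpoints below are typical RC defaults and stand in for whatever minimum and maximum PWM the Pixhawk is actually configured to output; read the real values from the configured output parameters rather than hard-coding them.

```cpp
#include <cassert>

// Clamp a servo-style PWM pulse width to the configured endpoints,
// then scale it linearly to a 0-100 percent motor command.
// PWM_MIN/PWM_MAX of 1000/2000 microseconds are assumed defaults.
int pwm_to_percent(int pulse_us) {
    const int PWM_MIN = 1000, PWM_MAX = 2000;
    if (pulse_us < PWM_MIN) pulse_us = PWM_MIN;
    if (pulse_us > PWM_MAX) pulse_us = PWM_MAX;
    return (pulse_us - PWM_MIN) * 100 / (PWM_MAX - PWM_MIN);
}
```

With this mapping, the minimum pulse gives 0%, mid-stick (1500 µs) gives 50%, and the maximum pulse gives 100%; out-of-range pulses are clamped rather than extrapolated.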
How did the neural net work out? Was it able to compensate for different lighting conditions? That seems to be one of the big problems with vision systems.

Does your team have a website that is still up?

Glad to hear that worked so well! We will definitely look into using a neural net. Any advice?

Also, do you have any experience in fusing computer vision with distance information (from a LiDAR or stereo vision)? Most teams in this year's RoboBoat are using such a sensor fusion.

The team talked about using a neural net, and one of the obstacles that came up was the need for sufficient training images. Did your team recreate the course to get the training images? Were you able to use video from last year's competition as training images?




© 2020   Created by Chris Anderson.