I have gotten a lot of time in on 2.0.49 and it is clearly the best yet. My quad (mostly stock, sonar / compass working perfectly, loiter P = .15) is incredibly fun to fly. Sonar terrain following, simple mode, and loitering are working like never before. Missions are the last great beta territory.
I have attempted missions maybe a dozen times in the last couple of versions with only one perfect, some almost right, and some wildly wrong. Yesterday, I went to a large, rocky terrain site and ran the following-
Here is what happened -
It began flawlessly: takeoff and the loiter altitude were great. It hit the next waypoints smoothly and right on target. It then went to 3 meters and headed toward the last loiter point. I watched in amazement, somehow convinced that it would change its mind and come home, as it got farther and farther away and sailed (at 3 m altitude, thankfully no cars were there!) across the highway into a hill just before Interstate 70. I then repeated the mission, only this time it skipped the first loiter, flew in the general direction of wp 3, and disappeared behind a hill.
The good part of all this is that Jason has delivered us a Revolution in R/C flying! We all know that losing orientation or sight of a vehicle is usually certain doom, but all I did was flip to simple mode alt hold, and with one stick I watched it pop back over the hill to me, terrain following a few feet high. Watching it effortlessly glide over rocks and gullies at a high rate of speed, with no concern for altitude or orientation, is amazing for a poor pilot like me.
Why did this happen? One can think of many possibilities. Maybe some conditions cause a nav calculation error. Maybe there is a bug in a command like Loiter Time. Maybe there is a rule I am missing and my command sequence is wrong. Maybe Mission Planner wrote the script wrong, or some goofy index error is reading a bad wp value. The log is attached, but of course a definitive answer is probably not possible.
The point is, though, errors can come from anywhere, and we need to start worrying about error handling on a craft zooming around on its own. If we really want autonomy, we had better start obeying Asimov's Three Laws of Robotics. We really do not need bad headlines, and the highway crossing disturbed me. I am thinking that we should implement a mission 'neutral zone', maybe set up when Mission Planner does 'Write WP'. It would set lat/long max/min values that, when exceeded, would abort the script and do an RTL or Land. Face it, if we are 50 meters north of the northernmost waypoint, something is wrong and we had better abort before damage happens.
Running a mission you know is confined to a 'box' would certainly lessen the heart attack risk. A rough sketch of what I mean is below. Is this possible?
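To make the idea concrete, here is a hypothetical sketch of the kind of check I have in mind. This is not ArduCopter code; the names (FenceBox, outside_fence), the margin, and the coordinates are all invented for illustration. The box would be computed from the mission's waypoints plus some slack at 'Write WP' time, and the autopilot would poll the check during the mission and trigger an RTL or Land on a breach.

```cpp
// Hypothetical "neutral zone" bounding-box check, for illustration only.
#include <cmath>
#include <cstdio>

// Box derived from the mission's outermost waypoints, plus a margin,
// e.g. computed by the ground station at 'Write WP' time.
struct FenceBox {
    double lat_min, lat_max;   // degrees
    double lon_min, lon_max;   // degrees
    double margin_m;           // extra slack around the waypoints, metres
};

static const double kPi = 3.14159265358979323846;

// Rough metres-per-degree conversions; good enough for a fence margin.
static double margin_deg_lat(double margin_m) { return margin_m / 111320.0; }
static double margin_deg_lon(double margin_m, double lat) {
    return margin_m / (111320.0 * std::cos(lat * kPi / 180.0));
}

// Returns true if the current position is outside the allowed box,
// in which case the mission should be aborted (RTL or Land).
bool outside_fence(const FenceBox& f, double lat, double lon) {
    const double dlat = margin_deg_lat(f.margin_m);
    const double dlon = margin_deg_lon(f.margin_m, lat);
    return lat < f.lat_min - dlat || lat > f.lat_max + dlat ||
           lon < f.lon_min - dlon || lon > f.lon_max + dlon;
}

int main() {
    // Illustrative box around a small mission, with 50 m of slack.
    FenceBox fence{39.05, 39.06, -108.57, -108.55, 50.0};

    // A position well north of the northernmost waypoint: abort.
    if (outside_fence(fence, 39.0612, -108.56)) {
        std::printf("Fence breach: abort mission, switch to RTL or Land\n");
    }
    return 0;
}
```

The point of the sketch is just that the check is cheap: two comparisons per axis, run every navigation cycle, and the craft never gets more than the margin past the box before the abort fires.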
Replies
A GeoFence embedded into every upload of waypoint parameters would be an excellent idea. It could be set at the same time one sets the waypoints, and it could be simply defined and modified. THIS is the kind of thing we could point to as an example of safety protocols. Am I the only one who can envision a future where the hobbyist goes out to a field and uploads that field's allowed flight envelope into the aircraft before takeoff? Then we could TRULY say we fly in safe areas, and if we somehow stray from them the drone executes an RTL or some other prescribed safety procedure. I would say it is not only possible, but it should be a user-definable system requirement, and I would certainly like to see resources devoted to making it happen.
I think that, while staying out of bad headlines is a good goal, and something we all need to be on guard about, there simply is no substitute for creating *good* headlines. We could all stop flying tomorrow, and the public would still be bombarded with phrases like "UAV strikes", "drone attacks", and other destructive uses of remotely piloted vehicles.
But it is about time for that same public to start hearing about all the wonderful things our flying robots can do. I think we need to include search-and-rescue challenges in our efforts and produce more videos of our projects, not simply for beauty's sake, but also to demonstrate beneficial uses of small flying devices: for farmers, for park rangers, for the local homeowners' association (perhaps they need their community center roof inspected... without the risk of a guy falling off a ladder?), and in every imaginative way we can dream up.
As to the three laws... well, I think your post supports my thinking. I've read all of Asimov, and Heinlein, and about half of Aristotle, and Cicero (on ethics, anyone? De Finibus?) and Caesar and Herodotus too, just for good measure. But you do not have to have read any of these, not even one treatise on logic, to recognize that no robot we build today can follow the Three Laws. In your narrative, we were unable to get the robot to follow a "go here" directive, and we're not even sure why it failed to follow that simple command.
So long as we're discussing 2D silicon wafers, I suspect that the only "brain" capable of (violating? observing?) ethics is our own. But it is worth talking about, because it is us, every single one of us, who must maintain the standards of behavior. And we know from the self-injury reports that we often cannot follow our own rules of personal safety. In spite of a dozen warnings, and in full knowledge of how our quadcopters work, engineered to the last milliwatt, we still sometimes create the conditions for our own accidents.
Which brings me back to the point of this message. It is not to point out that we are not ready for a robot constitution or voting rights. It is not to talk safety, or even to praise the ideas you've sprinkled in, programmatic no-fly zones and virtual fences, which have a great deal of merit. It is to say that "bad news" will always exist; let's make some good news.
Most of the tech we all use daily has killed people, from electricity to microwave ovens, from automobiles and every other form of transport to refrigerators. No one finds this acceptable, but we would also not get rid of any of these things, because they do essential things for us all, every day. Our civilian UAVs are less likely to cause harm than any of the items I have mentioned, but there will be bad press. Let's show their value; it is not enough to prevent accidents and failures.
I think the point you made is bang on, John.
And I believe there is a lot to be said on this subject. It branches out into subdomains like human error, training, software/firmware error, hardware error, and so on. Using this sophisticated technology at the hobby level also brings a responsibility to be just as sophisticated about contingency readiness.
Not that it matters so much, but I spend my professional life in the cockpit of the Airbus 330 and 340, airliners with a high level of automation and, behind the user interface, a high degree of complexity. In an automated environment a lot of design and training effort is invested in how to handle system degradation and human error when it occurs (nota bene: when it occurs, not if).
I encourage the honoured developers and the entire community to take an active part in conversations on this subject and to take on the responsibility of constantly improving the safety and reliability of our UAS activities, and of staying out of bad headlines.
There are definitely good reasons to consider failsafes of all kinds. Automatic navigation relies completely on a functioning GPS, but GPS can be jammed, lose an accurate lock, or just plain fail for any number of reasons. The operator/pilot of course has the responsibility to plan ahead, and he or she needs to be able to make fast and safe choices when the unexpected happens. I learned my lesson when I lost the orientation of my quad and, despite flying in simple mode, could not bring it back towards me... Since then, a bug in the magnetometer library has been fixed, but even so, I have not exactly pushed the envelope when it comes to my piloting skills. I don't dare risk it.
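To show the kind of GPS failsafe logic this implies, here is a small, hypothetical sketch of a fix-quality check that could suspend an autonomous mission when the position becomes suspect. The structure, the function names (gps_failsafe_check) and the thresholds are all invented for illustration and are not taken from ArduCopter or any other real flight stack.

```cpp
// Hypothetical GPS-health watchdog, for illustration only.
#include <cstdio>

struct GpsStatus {
    bool   fix_ok;        // 3D fix currently held
    int    num_sats;      // satellites used in the solution
    double hdop;          // horizontal dilution of precision
    double ms_since_fix;  // time since the last good fix, milliseconds
};

enum class NavAction { Continue, Loiter, Land };

// Decide what the autopilot should do when GPS quality degrades during
// an autonomous mission. The thresholds here are illustrative guesses.
NavAction gps_failsafe_check(const GpsStatus& gps) {
    if (!gps.fix_ok || gps.ms_since_fix > 5000.0) {
        // No usable position at all: stop navigating and descend under
        // attitude/altitude control only.
        return NavAction::Land;
    }
    if (gps.num_sats < 6 || gps.hdop > 2.5) {
        // Position is suspect: hold rather than keep flying waypoints.
        return NavAction::Loiter;
    }
    return NavAction::Continue;
}

int main() {
    GpsStatus degraded{true, 4, 3.8, 200.0};
    if (gps_failsafe_check(degraded) != NavAction::Continue) {
        std::printf("GPS degraded: suspend mission\n");
    }
    return 0;
}
```

The idea is simply that the vehicle should stop trusting its position before it flies off on a bad one; the exact thresholds and responses would need real tuning and testing.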