Work domain analysis and cognitive systems

I used to work as a humble research engineer/scientific developer at a research lab in the Netherlands; it's how I eventually got into this business. In that environment I was also introduced to various topics in cognitive science: interface design, cognitive workload, work domain analysis, and how this work typically generates complicated technological requirements that often call for new algorithms or functions, leading to systems that look and feel different from those designed by an engineer focused on the technical features of a device.

One text that circulated there is still a very interesting read. It presents drones, their control systems, and the way you interact with them from a different perspective. The usual perspective of the (mostly) technical engineers on this site is how functions map to a hardware architecture and/or how the software is modularized; you then compartmentalize specific functions into those areas and present the concept.

This post presents that alternative, more cognitive perspective. We first have to realize that the overall system is not limited to the drone and the ground control station. The system's boundaries include the operator, pilot, engineer, and anyone else involved, and should also include how well they are trained to communicate with each other.

An interesting paper to read on this subject is this one, produced by a colleague working in that lab:

The key element in that paper is a graph which demonstrates how you can consider "flight" from the perspective of cognitive systems. The technique used to derive this graph is called Work Domain Analysis, and it is more a cognitive analysis of the work at hand than a design that attempts to identify and locate where processes go in an architecture.

Because we look at the work domain from a cognitive systems perspective, the analysis applies to drones, airplanes, manual flight, automated flight, etc., because the technique allows you to swap out human cognition for automated systems and vice versa; the only thing that really changes is whether a machine or a human is doing the work. This makes it easier to figure out what the tasks of an automated system are, and it also lets you identify what skills are needed to make sense of the input and output of those systems, so you can determine which user interfaces are better, or in which cases they are more applicable. In other words... "work" here doesn't mean energy, but the cognitive functions that either a human being or an automated system may provide.

So let's apply the graph above to a pilot in a Cessna...

1. At the lowest level of "flight" you have the physical characteristics of the airplane: the available sensors, the fuel capacity, the control surfaces, motor, propeller, energy consumption to maintain flight, etc. The pilot has an awareness of the physical limitations of the airplane, which matters at the higher levels for mission planning and, while piloting, for staying within the flight envelope. Basically, this lowest level provides the capability of flight, but only when knowledge or control systems are applied does the airplane become airborne.
2. The next level is "flight control". Here we consider the issues of immediate concern for staying in the air: efficiency, reliability, staying well inside the flight envelope, wind estimation, gusts, etc. You could say the pilot can now effectively fly, but he doesn't yet know how to go anywhere useful. The lower level of flight has a high impact on how the constraints at this level are defined.
3. The "aviate" layer comes next, an intermediate layer not yet generally implemented in hardware. This layer is about flying safely given environmental constraints like buildings, church towers, and trees. During planning you generally don't have information available at the required density, so the aviate layer is the pilot slightly deviating from a navigation plan to meet those constraints.
4. The "navigate" layer is where you consider the cognitive functions necessary for mission planning. You look at no-fly zones and flight altitudes and start planning your flight through the air. It's possible to do this without a map if you have a cognitive map of the area and know where things are located in space. The navigate layer is about planning where to go and tracking that the flight does indeed happen that way.
5. The "mission" layer is where you determine what needs to be done. In this case it's the Cessna pilot talking with his client about where he wants to go.
6. The "joint mission" layer is where different pilots executing two different missions coordinate how they work together to achieve those goals.
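The layers above can be sketched as a data structure in which each layer's cognitive work is allocated to either a human or a machine, reflecting the point that the technique lets you swap one for the other. This is a minimal illustrative sketch; all names and the choice of concerns per layer are assumptions, not part of the original analysis.

```python
# Illustrative sketch of the abstraction hierarchy; layer names follow the
# list above, while the "concerns" entries and helper names are assumptions.
from dataclasses import dataclass
from enum import Enum

class Agent(Enum):
    HUMAN = "human"
    MACHINE = "machine"

@dataclass
class Layer:
    name: str
    concerns: list          # what this layer reasons about
    agent: Agent            # who currently performs the work

hierarchy = [
    Layer("physical",       ["sensors", "fuel", "control surfaces"], Agent.MACHINE),
    Layer("flight control", ["flight envelope", "wind estimation"],  Agent.MACHINE),
    Layer("aviate",         ["obstacles", "local terrain"],          Agent.HUMAN),
    Layer("navigate",       ["no-fly zones", "route planning"],      Agent.HUMAN),
    Layer("mission",        ["client goals"],                        Agent.HUMAN),
    Layer("joint mission",  ["coordination with other pilots"],      Agent.HUMAN),
]

def automate(layers, layer_name):
    """Swap a layer's work from human to machine; the analysis then tells
    you what interface the human needs in order to supervise that automation."""
    for layer in layers:
        if layer.name == layer_name:
            layer.agent = Agent.MACHINE

# Example: hand route planning over to an onboard planner.
automate(hierarchy, "navigate")
```

The value of keeping the allocation explicit is that every time a layer flips to `MACHINE`, the model reminds you that a supervision interface for that layer's concerns must exist on the human side.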

What are some interesting observations after this analysis?

When the Cessna is in an immediate emergency, the "navigate" and "mission" layers become much less important. The flight control and aviate layers suddenly become the only active layers in the system, because the pilot is only concerned with putting the craft down anywhere acceptable; his immediate "planning horizon" is therefore significantly different. This is an example of how the analysis helps to derive cognitive requirements for ground control stations. If telemetry shows that a craft is about to go down, you can adjust the user interface to better cope with that situation. The planned mission has become totally obsolete; instead you want to switch to a camera view if you have one, get better control over the last position, and offer the operator different control mechanisms (auto-switch to manual?) to regain control of the situation. In other cases, you may want to switch all your algorithms and automate the emergency landing process. (The easiest way out is to deploy a parachute, but you may want to determine the correct location to do so first.)
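The layer shift during an emergency can be made concrete as a ground control station choosing which UI elements to foreground based on telemetry. This is a hedged sketch: the function, field names, and thresholds are all invented for illustration and do not correspond to any real GCS API.

```python
# Illustrative sketch: a ground control station re-weighting the interface
# when telemetry signals an emergency. All names/thresholds are assumptions.

def select_ui_mode(telemetry):
    """Pick which layers (and thus which UI elements) to foreground."""
    emergency = (telemetry.get("battery_pct", 100) < 5
                 or telemetry.get("motor_fault", False))
    if emergency:
        # Navigate/mission recede; flight control and aviate take over.
        return {
            "active_layers": ["flight control", "aviate"],
            "show_camera": True,            # operator needs to see the ground
            "offer_manual_control": True,   # e.g. auto-switch to manual?
            "show_mission_plan": False,     # the planned mission is obsolete
        }
    return {
        "active_layers": ["navigate", "mission"],
        "show_camera": False,
        "offer_manual_control": False,
        "show_mission_plan": True,
    }
```

The point is not the thresholds but the structure: the same telemetry stream drives a change in which cognitive layers the interface supports.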

This also helps you figure out what can be automated, what impact that has on the situational awareness of the operator, and what kind of interfaces you should provide to make the automation effective. A very large factor in the effectiveness of automation is the operator's ability to understand the reasoning process of the computing device and to interpret the results of that process. If you build a computer that performs calculations with real-world consequences, but only put an LED on top that blinks when the computation is complete, you make a system that makes operators nervous. The challenge is to visualize the results, or the reasoning that produced them.

The idea is to bother the operator with as little detail as possible, so you must find abstractions for complicated planning elements and give the operator higher-level handles and tools to influence the planning process. Another discussion this work provokes is where to execute the automation. The network link to a drone is usually quite limited in how much data it can transfer, and this bandwidth constraint limits how much of the reasoning process can be displayed to the user. As the lessons of big data dictate, rather than trying to move all the data to where the computation happens, you bring the computation to the data. There are also reasons why you wouldn't want a UAV to be as autonomous as you can make it: the operator may want finer control over, and insight into, the planning process.
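One way to reconcile the narrow link with the operator's need for insight is to run the reduction onboard: the vehicle compresses its detailed plan into a few high-level handles before transmission. The sketch below is purely illustrative; the function, fields, and constraint labels are assumptions.

```python
# Illustrative sketch: onboard summarization of a detailed plan, so only a
# compact abstraction crosses the low-bandwidth link. Names are assumptions.

def summarize_plan(waypoints, constraints_hit):
    """Run onboard: reduce a detailed plan to high-level handles the
    operator can inspect and adjust from the ground station."""
    return {
        "n_waypoints": len(waypoints),
        "endpoints": (waypoints[0], waypoints[-1]) if waypoints else None,
        "constraints_hit": sorted(set(constraints_hit)),  # deduplicated
    }

# Example: three waypoints, with two distinct constraints encountered.
summary = summarize_plan(
    waypoints=[(52.0, 4.3), (52.1, 4.4), (52.2, 4.5)],
    constraints_hit=["no-fly zone", "altitude limit", "no-fly zone"],
)
```

A summary like this gives the operator the "why" of the plan (which constraints shaped it) without streaming the planner's full search state over the link.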

If you want to know more about how these techniques help you analyze how a drone fits into your organization, I recommend the site of Gavan Lintern. He provides some tutorials after you register, and they demonstrate how relatively easy the techniques are to apply. What you find out after applying them is new insight into what kind of knowledge and instinct is really involved in the task at hand. Primarily, the objective is to figure out what concerns people have during control tasks. Humans are pretty clever: present them with a task and they typically figure out instinctively what the constraints and affordances related to that task are, and they learn to optimize towards those within a few cycles. Over time they develop a particular skill in how fast or slow they can fly, how it feels just before a stall, etc. (this happens to be at the flight control level). When you analyze these carefully, you usually find great opportunities for innovation or improved algorithms.

Figure out what the concerns are of people that have a lot of experience in one area and design your automated solutions around those!

