A: They're both examples of "multi-agent AI scripting". MIT postdoc Frans Oliehoek has been developing some powerful methods for swarms of autonomous agents (software or hardware, such as UAVs) to communicate with each other and distribute tasks by following simple policies. One of the simulations his colleagues have tested this on is the video game StarCraft, as shown in the demo above.
Here's his code library, the "Multi-Agent Decision Process Toolbox", which may look a little intimidating. Fortunately, the MIT press office has also written the work up accessibly. Here's an excerpt:
In a series of papers presented at the International Conference on Autonomous Agents and Multiagent Systems, Oliehoek and colleagues at several other universities have described a variety of ways to reduce the scale of the policy-calculation problem. “What you want to do is try and decompose the whole big problem into a set of smaller problems that are connected,” Oliehoek says. “We now have some methods that seem to work quite well in practice.”
The key is to identify cases in which structural features of the problem mean that certain combinations of policies don’t need to be evaluated separately. Suppose, for instance, that the goal is to find policies to prevent autonomous helicopters from colliding with each other while investigating a fire. It could be that after certain sequences of events, there’s some possibility of helicopter A hitting helicopter B, and of helicopter B hitting helicopter C, but no chance of helicopter A hitting helicopter C. So preventing A from colliding with C doesn’t have to factor in to the calculation of the optimal policy. In other cases, it’s possible to lump histories together: Different histories can still point to the same result for the same action.
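The helicopter example above can be sketched in a few lines of toy code. This is an illustrative Python sketch, not code from Oliehoek's toolbox: the agent names, policy options, and cost function are all made up. The point it demonstrates is the structural one from the excerpt: because the collision cost decomposes over the interacting pairs (A,B) and (B,C), and there is no (A,C) term, you never need to evaluate all joint policy combinations; you can fix B's policy and let A and C optimize independently.

```python
from itertools import product

# Hypothetical toy problem (names and costs are illustrative, not from
# the MADP Toolbox): three helicopters, each picking one of three policies.
policies = ["hold", "climb", "detour"]

# Pairwise collision costs exist only on the edges (A,B) and (B,C);
# A and C can never collide, so no (A,C) term is needed.
def edge_cost(p1, p2):
    # Illustrative: flying the same policy risks a collision (cost 10).
    return 10 if p1 == p2 else 1

def joint_cost(pa, pb, pc):
    # Total cost decomposes over the two interacting pairs.
    return edge_cost(pa, pb) + edge_cost(pb, pc)

# Brute force: evaluate all |P|^3 = 27 joint policies.
brute = min(product(policies, repeat=3), key=lambda j: joint_cost(*j))

# Factored search: the two cost terms share only B's policy, so for each
# choice of pb, A and C can be optimized independently. This takes
# O(|P| * 2|P|) evaluations instead of O(|P|^3), and grows far more
# slowly as agents and policies are added.
best = None
for pb in policies:
    pa = min(policies, key=lambda p: edge_cost(p, pb))
    pc = min(policies, key=lambda p: edge_cost(pb, p))
    cand = (pa, pb, pc)
    if best is None or joint_cost(*cand) < joint_cost(*best):
        best = cand

# Both searches reach the same optimal cost.
assert joint_cost(*best) == joint_cost(*brute)
```

With three agents and three policies the savings are trivial, but the same decomposition is what makes larger problems tractable: the exponent in the brute-force count is the number of agents, while the factored search only pays for the pairs that actually interact.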
Comments
http://www.eecs.harvard.edu/ssr/projects/progSA/kilobot.html