Ditch humans or cooperate? Google’s DeepMind tests ultimate AI choice with game theory

February 10, 2017

DeepMind, the London-based artificial intelligence unit of Google’s parent Alphabet Inc., has been running a series of simulations aimed at answering a key AI question once and for all: will the robots play nice, or will they try to kill us all?

DeepMind’s latest research is focused on the dichotomy between cooperation and competition, specifically among reward-optimized agents (human or synthetic), in highly variable environments.


While far from deciding humanity’s fate, the research gives us an indication of the extent to which man and machine may cooperate in the near future, on everything from transportation systems to economics.

The team is trying to expand the comfort zone of existing AI agents in a variety of ways, most recently through two distinct game types that draw heavily on elements from game theory.

In the first game, the two agents must compete to gather as many apples as possible, a straightforward premise centered on scarcity and cooperation. The more plentiful the apples, the more likely the players are to cooperate or, at least, to leave each other alone.


However, there is a twist: both players are armed with a ray gun and can stun the other player at any time, immobilizing them for a brief period and allowing the aggressor to gather resources unimpeded. This is classified as ‘complex behavior’ within the game, as it requires more computing power, thought, or effort to carry out than a singular directive such as collecting apples.
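The incentive at work here can be sketched with a toy back-of-the-envelope model. This is purely illustrative: the function name, the per-step collection cap, and all the numbers are assumptions for the sake of the sketch, not DeepMind’s actual environment or code.

```python
def per_episode(regrowth, steps=100, zap=False, stun=10, zap_time=1):
    """Expected apples for one agent over an episode of `steps` steps.

    `regrowth` is apples appearing per step; an agent can pick at most
    one apple per step (the cap), so abundance eventually saturates.
    """
    cap = 1.0
    shared = min(regrowth / 2, cap)  # per-step haul when both agents collect
    solo = min(regrowth, cap)        # per-step haul while the rival is stunned
    if not zap:
        return shared * steps
    # Aggressive play: spend `zap_time` steps firing (collecting nothing),
    # then `stun` steps collecting alone, then share the remainder.
    return solo * stun + shared * (steps - stun - zap_time)

# Scarce apples (0.2/step): stunning the rival pays off.
# Abundant apples (4.0/step): the agent is already collecting at capacity,
# so the zap step is pure waste and peaceful gathering wins.
scarce_gain = per_episode(0.2, zap=True) - per_episode(0.2)
abundant_gain = per_episode(4.0, zap=True) - per_episode(4.0)
```

Under these toy numbers, aggression only beats peaceful gathering when apples are scarce, matching the pattern the article describes: plenty makes the players leave each other alone.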


The DeepMind team found that the greater the level of intelligence applied (or the larger the neural network supporting the software agent), the more aggressive the agents became.


The second game, the Wolfpack game, involves hunting for prey for a reward. The twist here is that other wolves in the…
