Understanding how rational agents cooperate or fight, especially in the face of resource scarcity, is a fundamental problem for social scientists. It shaped our evolution as a social species, and it underpins the modern economy and geopolitics. Soon, though, this problem will also be at the heart of how we understand, control, and cooperate with artificially intelligent agents, and of how they work among themselves.
Researchers at DeepMind, Google's AI lab, wanted to know whether distinct artificial intelligence agents would work together or compete when faced with a shared problem. The experiment would help scientists understand how our future networks of smart systems may behave.
The researchers pitted two AIs against each other in a pair of video games. In the first, called Gathering, the agents had to collect as many "apples" as possible, and each had the option to shoot the other, temporarily taking its opponent out of play. The results were intriguing: the two agents gathered harmoniously as long as apples were plentiful, but once resources started to dwindle, each realized that temporarily disabling the other gave it an advantage, and the zapping began. As scarcity increased, so did conflict.
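A toy back-of-the-envelope model makes that incentive flip concrete. To be clear, this is not DeepMind's environment or learning code; the respawn rate, zap duration, and aiming cost below are invented purely for illustration.

```python
# Toy sketch of the Gathering incentive. NOT DeepMind's environment;
# every number here is an assumption chosen to illustrate the tradeoff.

def expected_apples(action: str,
                    respawn_rate: float,           # apples appearing per step (assumed)
                    gather_capacity: float = 1.0,  # apples one agent can pick per step
                    zap_duration: int = 5,         # steps a zapped rival is out of play
                    aim_cost: int = 2              # steps spent lining up the shot
                    ) -> float:
    """Expected apples for one agent over a fixed window of steps."""
    horizon = zap_duration + aim_cost  # compare both actions over the same window
    if action == "gather":
        # The rival gathers too, so the respawning apples are split two ways.
        per_step = min(gather_capacity, respawn_rate / 2)
        return per_step * horizon
    # "zap": forfeit aim_cost steps, then harvest alone while the rival is frozen.
    per_step_alone = min(gather_capacity, respawn_rate)
    return per_step_alone * zap_duration

for rate in (4.0, 0.5):
    better = max(("gather", "zap"), key=lambda a: expected_apples(a, rate))
    print(f"respawn_rate={rate}: {better}")
# respawn_rate=4.0: gather  (plenty for both; shooting just wastes time)
# respawn_rate=0.5: zap     (scarcity makes disabling the rival pay off)
```

Under these made-up numbers the crossover falls out naturally: when apples respawn faster than the agents can pick them, zapping only wastes time, but once the supply drops below what two agents would split, removing the rival becomes the rational move.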
Interestingly enough, the researchers found that introducing a more computationally powerful AI into the mix produced more conflict even without scarcity. That's because the more powerful AI found it easier to compute the details needed to land a shot, such as the opponent's trajectory and speed, making aggression cheaper for it. In other words, it acted like a rational economic agent.
However, before you start preparing for Judgment Day, note that in the second game, called Wolfpack, the two AI systems had to collaborate closely to win. In this instance, the agents changed their behavior to maximize cooperation, and the more computationally powerful the AI, the more it cooperated.
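The same kind of sketch shows why Wolfpack pulls in the opposite direction. The success probabilities and payouts here are again invented, not taken from the paper; they only mirror its structure, in which a capture pays each wolf more when the pack hunts together.

```python
# Toy model of the Wolfpack incentive, with invented numbers: a
# coordinated chase succeeds more often and pays each wolf more than a
# lone kill, so cooperation is the rational-agent choice by design.

def expected_reward(strategy: str,
                    p_lone: float = 0.3,     # chance a solo chase succeeds (assumed)
                    p_joint: float = 0.8,    # chance a coordinated chase succeeds
                    base: float = 1.0,       # payout for a lone capture
                    pair_bonus: float = 1.0  # extra paid to each wolf hunting together
                    ) -> float:
    """Expected payoff per wolf for hunting alone vs. together."""
    if strategy == "alone":
        return p_lone * base
    return p_joint * (base + pair_bonus)

print(expected_reward("alone"))     # 0.3
print(expected_reward("together"))  # 1.6
```

The contrast with Gathering is the point: identical learning machinery lands on conflict or cooperation depending on which behavior the game's payoffs reward.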
The conclusion is fairly simple to draw, though it has extremely wide-ranging implications: as rational economic agents, AIs will cooperate or fight depending on which suits them better. That idea may well underpin the way we design future AI systems and the methods we use to control them, at least until they reach the singularity and develop superintelligence. Then we're all doomed.