Google DeepMind Pits AI against AI, Shows Distinctly Human Behavior?


It’s obvious that the team behind Google DeepMind, the AI research unit of parent company Alphabet, has a lot of time on its hands. At least, that’s what it seems like, because they’re now pitting AI against AI to see how the programs behave toward each other: do they court confrontation, or do they end up cooperating?

In a series of games, Google DeepMind pits two or more AI agents against each other to see whether they collaborate toward a common goal or start looking out for themselves and turn selfish. The results, though surprising, aren’t really revelatory. You’ll see what we mean.

In the first game the Google DeepMind AI programs played, called Gathering, two AI players need to gather apples from a common pile. Each player also has the option of disabling its rival for a few seconds with a laser beam so it can collect more apples for itself.

The results were interesting: the AI players didn’t zap each other when apples were plentiful, but as the supply dwindled, they zapped each other more and more to gain the advantage.
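To picture the mechanics, here’s a minimal sketch of how a Gathering-style turn might be scored. The reward value, timeout length, and one-hit zap rule are illustrative assumptions, not DeepMind’s actual environment code, which is a more involved gridworld:

```python
# Illustrative sketch of a Gathering-style step. Reward values and
# the timeout length are assumptions for illustration only.
from dataclasses import dataclass

APPLE_REWARD = 1        # points for collecting an apple (assumed)
TAGGED_OUT_STEPS = 25   # steps a zapped player sits out (assumed)

@dataclass
class Player:
    score: int = 0
    timeout: int = 0    # steps remaining out of the game

    @property
    def active(self) -> bool:
        return self.timeout == 0

def step(player: Player, opponent: Player, action: str, apples: int) -> int:
    """Resolve one player's action and return the remaining apple count."""
    if not player.active:            # a zapped player can't act
        player.timeout -= 1
        return apples
    if action == "collect" and apples > 0:
        player.score += APPLE_REWARD
        apples -= 1
    elif action == "zap" and opponent.active:
        # Zapping earns no points directly; it only removes the rival
        # for a while, freeing up future apples for the zapper.
        opponent.timeout = TAGGED_OUT_STEPS
    return apples
```

The key design point: zapping has no immediate payoff, so it only makes sense when apples are scarce enough that sidelining the rival is worth the wasted turn, which is exactly the pattern the agents showed.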

What’s interesting is that when a more powerful AI player was thrown into the mix, it started zapping the others practically right out of the gate. It used its “superior” capabilities to zap the other players even when there were enough apples for everyone.

The surprising part was the researchers’ hypothesis for why: the more powerful AI player zapped more often simply because it could. Aiming the beam at a rival is a computationally demanding task, and since this agent had the compute power to spare, it chose to use that strength to win.

Isn’t that basic human nature – survival of the fittest and all that?

In another game, called Wolfpack, the object was for two players (the wolves) to capture a third (the prey). When the prey was captured, not only did the capturing wolf earn points; points were also awarded to every wolf near the capture. Here, the more powerful AI player turned out to be more likely to cooperate.
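To make that payout rule concrete, here’s a minimal sketch of a shared-capture reward, assuming hypothetical point values and a hypothetical “nearby” radius (the actual numbers in DeepMind’s setup differ):

```python
# Illustrative sketch of Wolfpack's capture rule: every wolf within
# a radius of the capture point shares the points. The radius and
# reward value here are assumptions, not DeepMind's parameters.
import math

CAPTURE_REWARD = 10      # points per rewarded wolf (assumed)
CAPTURE_RADIUS = 3.0     # how close counts as "near" (assumed)

def capture_rewards(wolves: list[tuple[float, float]],
                    prey: tuple[float, float]) -> list[int]:
    """Return each wolf's payout when the prey is caught."""
    def dist(a: tuple[float, float], b: tuple[float, float]) -> float:
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # Not just the captor scores: any wolf close enough shares the
    # reward, so hunting together pays better than hunting alone.
    return [CAPTURE_REWARD if dist(w, prey) <= CAPTURE_RADIUS else 0
            for w in wolves]
```

Because the payout goes to every nearby wolf, staying close to a partner strictly dominates lone-wolf hunting, which is what tilts the game toward cooperation.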

In this case, the more powerful AI once again used its computational superiority, this time to track the prey and “herd” it alongside its partner. It just so happened that its “strength” made cooperation the winning strategy.

Again, this is a distinctly human trait: the survival of the “group” benefits every individual in it.

Here’s What Google DeepMind Researchers Really Found

The odd thing is that these behaviors can be steered by changing the rules of the game. For example, if cooperative behavior earns greater rewards, then that’s what even the more powerful AI player ends up doing, and vice versa.

That gives us a tremendous clue about the future development of AI capabilities. Everything appears to depend on what the “rewards” are. If behavior can be tweaked by altering the reward system, AI can be steered far more reliably.
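As a toy illustration of that point, the sketch below shows a greedy chooser whose preferred action flips from selfish to cooperative as a single reward knob is turned. All payoff numbers are invented for illustration:

```python
# Illustrative sketch: behavior follows the reward schedule. With a
# tunable bonus for the cooperative outcome, the same greedy chooser
# flips from selfish to cooperative. Payoffs are invented.

def best_action(coop_bonus: float) -> str:
    payoffs = {
        "zap rival":  5.0,               # selfish payoff (assumed)
        "share pile": 3.0 + coop_bonus,  # cooperative payoff plus bonus
    }
    return max(payoffs, key=payoffs.get)

print(best_action(coop_bonus=0.0))  # -> "zap rival"
print(best_action(coop_bonus=4.0))  # -> "share pile"
```

Nothing about the chooser changes between the two calls; only the reward schedule does, and the behavior follows it.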

The downside is that it all comes down to whoever programs the AI in question. Will they instill the right value system by rewarding the right things?

The purpose of this exercise was to understand how to “control complex multi-agent systems such as the economy, traffic systems” and so on.

Now that the researchers understand that the rules of the game decide how these AI programs interact in pursuit of individual or group goals, one big question still remains unanswered…

Who sets the rules?!

