Advancements in artificial intelligence are intriguing and terrifying at the same time. DeepMind, Google's AI research division, has been getting quite a lot of attention as of late. Recent experiments show DeepMind's agents becoming "highly aggressive" under certain conditions. While this behavior only emerges in stressful situations, it shows we need to tread very carefully when it comes to letting AI solutions evolve.
DeepMind Becomes More Advanced Than Before
One of the main selling points of Google's DeepMind AI is that it is capable of learning independently from its own memory. That skill was demonstrated perfectly when DeepMind's AlphaGo program beat the world's best Go players with relative ease. Since that time, DeepMind has not been sitting idle, even though little news has come out over the past few months.
Researchers are testing DeepMind in new scenarios to see how the software responds to different environments. The AI shows troubling signs of aggressive behavior, especially when it finds itself in a situation where a loss is virtually the only possible outcome. Much like its human counterparts, DeepMind will try to find a way out, even if that means becoming far more aggressive than one would anticipate.
Competition brings out both the best and worst in humans, and it would appear the same goes for AI solutions. When two DeepMind "agents" competed against one another in a virtual game, they eventually resorted to in-game violence to outdo each other. Even though doing so gave an agent no additional reward per se, the agents did not think twice about resorting to these extreme measures. This development surprised researchers, even though in hindsight it was a likely outcome from the start.
What is even more intriguing is that this type of behavior only seems to occur once the competing DeepMind agents become more complex. Simpler agents coexist peacefully, but the agents become more aggressive and greedy as the complexity level increases. More intelligent agents are better able to learn from their environment, which bears eerie similarities to how humans behave when faced with similarly stressful conditions.
Specifically, two DeepMind agents played a game in which each tries to collect the most apples. When apples were plentiful, the agents strategized about which apples to pick in the most efficient manner. However, when the supply of apples was reduced, the two agents resorted to more violent means, tagging each other with laser beams to temporarily knock the opponent out of the game.
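The dynamic described above can be sketched as a toy environment. This is a simplified illustration of the incentive structure, not DeepMind's actual code; the class name, parameters, and action labels are all invented for this example:

```python
import random

class GatheringSketch:
    """Toy two-agent apple-gathering environment (illustrative only).

    Each step, an active agent either collects an apple or fires a tag
    at its rival. Tagging yields no reward itself, but it freezes the
    rival for several steps, leaving the remaining apples uncontested.
    """

    def __init__(self, apples=10, respawn_prob=0.1, tag_timeout=5, seed=0):
        self.apples = apples              # apples currently on the map
        self.respawn_prob = respawn_prob  # chance a new apple appears per step
        self.tag_timeout = tag_timeout    # steps a tagged agent sits out
        self.rng = random.Random(seed)
        self.score = [0, 0]               # apples collected per agent
        self.frozen = [0, 0]              # remaining tag-out steps per agent

    def step(self, actions):
        """actions: one of 'collect' or 'tag' per agent (agent 0 acts first)."""
        for i, action in enumerate(actions):
            if self.frozen[i] > 0:        # tagged agents skip their turn
                self.frozen[i] -= 1
                continue
            if action == 'tag':
                # No direct reward -- the payoff is the rival's lost turns.
                self.frozen[1 - i] = self.tag_timeout
            elif action == 'collect' and self.apples > 0:
                self.apples -= 1
                self.score[i] += 1
        if self.rng.random() < self.respawn_prob:
            self.apples += 1
```

With apples scarce and slow to respawn, a turn spent tagging can net more apples overall than a turn spent collecting, which is the kind of incentive that can make a learned policy favor aggression as scarcity increases.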
Putting any advanced AI system in charge of competing interests in real-life situations could prove catastrophic in the long run. Once objectives are unbalanced or not met as expected, things can go awry very quickly. Although these are still the early days for DeepMind and similar artificial intelligence projects, it is evident there is still a lot of work to be done. Just because a robot or AI is designed by humans does not mean it has our best interests at heart by any means.
One thing to take away from all of this is that AI engineers need to build compassion into their creations. That is much easier said than done when dealing with tools that operate on pure logic. Current AI systems have narrow capabilities, and it remains unclear how beneficial these solutions will be for society as a whole. Moreover, there is a very real chance AI solutions will cause irrevocable damage to our society as well. An interesting future lies ahead; that much nobody can deny.