Google’s DeepMind AI Is Now Capable Of Self-Teaching New Things

One of the primary selling points of artificial intelligence is that the technology can learn over time. DeepMind, the AI company Google acquired in 2014, recently bolstered its system to learn new tricks faster. According to tests, DeepMind is now capable of reaching close to 87% of expert human performance in games. This is an exciting development, although its real-life use cases remain to be determined.

Another Step Forward For Google’s DeepMind AI

Machine learning and artificial intelligence go hand in hand, and Google’s DeepMind is steadily picking up new tricks along the way. Improving the performance of this AI is of the utmost importance, even though its track record speaks for itself. With new learning methods under the hood, DeepMind can train itself without a human teaching it new things.

That being said, the DeepMind learning process still has little to do with useful real-life situations. For now, the AI has become better at controlling the pixels on a computer screen. For an artificial intelligence, learning to control pixels on a screen is comparable to a human learning to use their hands and feet.

Second, DeepMind can now properly evaluate the rewards of a game based on its previous performances. Combining these two new tools with the previously introduced deep reinforcement learning methods makes Google’s AI quite a robust offering. When technology learns to think on its own, we are one step closer to creating consciousness.
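DeepMind’s actual training code is not described in this article, but the core idea of learning purely from game rewards can be illustrated with a minimal tabular Q-learning sketch. Everything here (the toy one-dimensional “game,” the parameters, the state count) is invented for illustration; it is a standard reinforcement-learning method, not DeepMind’s implementation:

```python
import random

# Toy 1-D "game": states 0..4, reward 1.0 for reaching state 4.
# This environment is an illustrative stand-in, not DeepMind's.
N_STATES = 5
ACTIONS = [-1, +1]  # move left or move right

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning: the agent improves from rewards alone."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action index]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit, occasionally explore.
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda i: q[state][i])
            nxt, reward, done = step(state, ACTIONS[a])
            # Q-learning update: nudge the estimate toward
            # reward + discounted value of the best next action.
            q[state][a] += alpha * (reward + gamma * max(q[nxt]) - q[state][a])
            state = nxt
    return q

q = train()
```

After training, the learned values prefer moving right (toward the reward) in every non-terminal state, even though the agent was never told which action was correct.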

On the other hand, artificial intelligence capable of teaching itself new things can be seen as a troublesome development. With no human involvement, the question becomes what DeepMind will teach itself, and how. Although this is vastly different from the Skynet scenario people may envision, there is a somewhat legitimate reason for concern.

For now, all of this new technology applies to gaming environments rather than the real world. DeepMind is also not connected to the Internet, so there is no immediate “danger” of the AI learning things that could cause significant damage. However, given that it can mimic human performance in games, it is only a matter of time before DeepMind does so in other areas as well.

The bigger question is what AI solutions such as DeepMind will be capable of in the future. While financial institutions are exploring AI to mimic human interaction, its potential is far greater. That is also what makes these developments so worrisome, as people tend to see only the bad side of things. Technology needs to be treated with respect rather than opposed whenever innovation comes along.

If you liked this article, follow us on Twitter @themerklenews and make sure to subscribe to our newsletter to receive the latest bitcoin, cryptocurrency, and technology news.

  • Wynand Vermeulen

    If they can play games, which I suck at, but I am good at being a coder and consultant, will AI suck at being consultants?

    • Roger Davies

      Let’s hope so!

  • Roger Davies

    Google didn’t develop Deep Mind it was Demis Hassabin(sic) and the team at his company which he co founded with another chap who’s name escapes me at the minute, which Google then bought.

    If you are going to report then you should at least have done more research then twts like me who are reading your dross