Google Brain is Using AI to Create Sounds Humans Never Heard Before

Using artificial intelligence (AI), Google engineers are now producing entirely new sounds humans have never heard before. According to Wired, by exploiting the mathematical characteristics of the notes that emerge from combining various instruments, AI can create countless sounds no human has ever heard.

Creating new sounds

Google Magenta, a small group of AI researchers building systems that can create their own art, has recently been working on a project called NSynth, or Neural Synthesizer. Its team members, Jesse Engel and Cinjon Resnick, are collaborating with members of Google Brain, the tech giant’s core AI lab, where researchers explore neural networks.

NSynth’s goal is to give musicians a completely new range of tools they can use to make music, possibly taking the music industry to a whole new level. These new sounds are created using an age-old practice taken to new heights thanks to AI. Critic Marc Weidenbaum pointed this out, stating:

“The blending of instruments is nothing new. Artistically, it could yield some cool stuff, and because it’s Google, people will follow their lead.”

In order to create these sounds, NSynth used a massive sound database built by collecting a wide range of notes from thousands of instruments. All of it was then fed into a neural network, which analyzed the data and learned the audible characteristics of each instrument. The network was then able to reproduce the sound of every single one of those instruments, and to combine those sounds to create something entirely new.
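The core idea is that each instrument’s sound is summarized as a learned vector (an embedding), and new sounds come from blending those vectors. As a purely illustrative sketch, not Google’s actual code (NSynth uses a WaveNet-style autoencoder, and the embeddings below are random stand-ins), the blending step looks like linear interpolation:

```python
import random

random.seed(0)

# Hypothetical learned embeddings for one note played on two
# different instruments. In NSynth these would come from a trained
# autoencoder; here they are just random 16-dimensional stand-ins.
flute = [random.gauss(0, 1) for _ in range(16)]
bass = [random.gauss(0, 1) for _ in range(16)]

def interpolate(a, b, t):
    """Blend two instrument embeddings: t=0 gives a, t=1 gives b."""
    return [(1 - t) * x + t * y for x, y in zip(a, b)]

# Halfway between flute and bass: a point in the learned sound space
# that corresponds to neither original instrument.
hybrid = interpolate(flute, bass, 0.5)
```

In the real system, a decoder network would turn `hybrid` back into audio; the point here is only that “combining instruments” means averaging in a learned space, not mixing two recordings.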

Google’s team doesn’t just want to create new sounds. It has already built an interface that lets a musician explore the audible space between up to four different instruments at once. Moreover, the team could even train another neural network to mimic these new sounds and combine them with those we already know.
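The four-instrument interface can be pictured the same way: a weighted average of four embeddings, with non-negative weights summing to 1 so the result stays inside the space spanned by the four instruments. Again, this is a sketch under the assumption that sounds live in a learned vector space, not Google’s actual implementation:

```python
def mix(embeddings, weights):
    """Convex combination of instrument embeddings.

    weights must be non-negative and sum to 1, so the blended point
    stays inside the audible space spanned by the given instruments.
    """
    assert abs(sum(weights) - 1.0) < 1e-9
    dim = len(embeddings[0])
    return [sum(w * e[i] for w, e in zip(weights, embeddings))
            for i in range(dim)]

# Equal blend of four stand-in instrument embeddings (one-hot vectors
# used purely so the result is easy to check by eye).
instruments = [[float(i == j) for i in range(4)] for j in range(4)]
blend = mix(instruments, [0.25, 0.25, 0.25, 0.25])
# blend is [0.25, 0.25, 0.25, 0.25]
```

Dragging a point around the interface would amount to changing the four weights continuously.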

Anyone who would like to download and use NSynth’s sound database can do so, as the team has released it along with its research paper. The new tool will be presented at Moogfest, an annual art, music, and tech festival taking place in Durham, North Carolina.

AI artists and music

Consider Emily Howell, a computer program created by a UC Santa Cruz professor. She can write a huge amount of music, and when tested, most people couldn’t tell it wasn’t written by a human being. She’s even got her own YouTube channel.

Aiva (Artificial Intelligence Virtual Artist), an AI machine that composes classical music, created by Aiva Technologies, has even been granted composer status. This means it can write music under its own name, and it has even released an album called Genesis.


With the use of this new technology, there’s no telling what both AI and artists will be able to produce in the future. Not only will there be new possibilities for the entertainment industry, there may also be therapeutic possibilities.

If you liked this article, follow us on Twitter @themerklenews and make sure to subscribe to our newsletter to receive the latest bitcoin, cryptocurrency, and technology news.