It is evident there are a lot of advancements to be made in the world of artificial intelligence. A new project by MIT and Google has allegedly created an AI solution capable of linking sound, sight, and text. In doing so, the AI will gain a better understanding of the world around it. Quite an interesting step forward, especially considering consumer robots are under development as we speak.
In the world of artificial intelligence, there are still quite a few limitations to take into account. Most AI solutions cannot handle multiple types of input; the majority of projects focus on sound, vision, or text alone. Projects combining all three remain very rare, which is not surprising, as we are still a long way from creating a complete AI.
Then again, this new project by Google and MIT brings us one step closer to achieving that goal. More specifically, the researchers have allegedly developed an AI solution capable of using sound, text, and images at the same time. That effectively gives the project a rough analogue of two human senses, sight and hearing, plus language. It is quite an ambitious project, that much is evident.
For us humans, it is virtually impossible to use only one of our senses at any given time. Artificial intelligence, on the other hand, has always faced that limitation, until now. Matching what you see to what you hear is second nature for us. The average AI does not have that luxury whatsoever. This also highlights how much work still needs to be done before we can even contemplate creating a proper artificial intelligence.
Creating an algorithm capable of learning and adapting like its human counterpart is not easy by any means. The new papers released by MIT and Google pave the way to making this happen sooner rather than later, though. More specifically, the papers outline how an AI can align what it sees with what it hears, similar to how the human brain operates.
This also means the algorithm powering the AI is not necessarily learning anything new. Instead, the knowledge gained through its individual senses can be linked together as a way to gather and confirm existing knowledge. It is quite a novel way to go about things, that much is certain. Applying this technology to self-driving cars, for example, would definitely expand a vehicle's knowledge. It could learn what an ambulance is from its siren alone, without ever having seen one.
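To illustrate the general idea, here is a minimal sketch in PyTorch of how two modalities can be mapped into one shared embedding space and pulled together with a contrastive loss, so that matching sounds and images end up close to each other. The encoder shapes, the loss, and the names (ModalityEncoder, alignment_loss) are illustrative assumptions for this sketch, not details taken from the MIT and Google papers.

```python
# Minimal sketch of cross-modal alignment: two encoders map sound and images
# into a shared embedding space, and a contrastive loss pulls matching pairs
# together. All dimensions and names here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityEncoder(nn.Module):
    """Tiny stand-in for a real audio or image encoder."""
    def __init__(self, input_dim: int, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 256),
            nn.ReLU(),
            nn.Linear(256, embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Normalize so similarity is a simple dot product (cosine similarity).
        return F.normalize(self.net(x), dim=-1)

def alignment_loss(audio_emb: torch.Tensor, image_emb: torch.Tensor,
                   temperature: float = 0.07) -> torch.Tensor:
    """Contrastive loss: the i-th sound should match the i-th image."""
    logits = audio_emb @ image_emb.t() / temperature
    targets = torch.arange(logits.size(0))
    # Symmetric: match sounds to images and images to sounds.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Toy usage: random "audio" and "image" features standing in for real data.
audio_encoder = ModalityEncoder(input_dim=64)
image_encoder = ModalityEncoder(input_dim=512)
audio_batch = torch.randn(8, 64)
image_batch = torch.randn(8, 512)
loss = alignment_loss(audio_encoder(audio_batch), image_encoder(image_batch))
loss.backward()  # a real training loop would step an optimizer here
```

Once the two embedding spaces are aligned in this fashion, a label learned for one modality (say, an ambulance siren) can be carried over to the other without paired training examples, which is the kind of knowledge transfer described above.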
Training this system will take a lot of work, even though several early tests have already proven quite successful. For now, the algorithm is only being fed “easy” information, but there is no reason to think it is incapable of handling more complex material. This type of groundbreaking technology should bring new life to AI development over the coming years.