Artificial Intelligence (AI) seems to be constantly improving. It’s already able to create AI by itself, and now Facebook has been training bots to learn how to negotiate, a complex task the tech giant believes necessary for its Messenger bots to really be helpful. However, the social media giant’s researchers inadvertently taught the bots how to lie, as they learned from real human conversations.
Chat bots that can negotiate and lie
So far, bots in Facebook’s Messenger app have only been able to perform simple tasks, such as helping us find products to buy online. Now, the Facebook Artificial Intelligence Research (FAIR) team created an experiment in which two bots were forced to negotiate with each other, so that in the future they’re be able to hold meaningful conversations with people.
In the experiment, both agents were shown two books, one hat, and three balls, and had to split these objects between them by bargaining with one another. Each agent was given a different value that represents how much it cared about a specific item, with neither agent knowing how much the other cared about any item.
Researchers then ran a series of negotiating scenarios in which it was impossible to give both agents exactly what they wanted. At first, the bots were given the task of responding based on the likelihood of the direction the conversation would take. Using this method, they barely negotiated, as according to researchers they were too willing to compromise.
Then, the team decided to train the bots to maximize profit. Furthermore, to encourage them to reach an agreement, both agents would be given 0 points if they either walked away, or didn’t strike a deal after 10 rounds.
This significantly improved their negotiation skills, so much so they faked interest in certain objects to add perceived value – they essentially lied about what they wanted. Facebook’s post reads:
There were cases where agents initially feigned interest in a valueless item, only to later “compromise” by conceding it — an effective negotiating tactic that people use regularly. This behavior was not programmed by the researchers but was discovered by the bot as a method for trying to achieve its goals.
A successful experiment?
Using a software system called dialogue rollouts, researchers managed to get bots to plan ahead. They were taught how to negotiate through a compilation of data generated by 5,808 people negotiating with each other, according to Mashable.
The experiment’s goal was to get the bots to act as human-like as possible. We can, as such, call the experiment a success: lying is definitely part of a human’s repertoire. Moreover, when put to the test, most people weren’t able to tell they were chatting with trained bots and not with real people.
Ultimately, this type of research will help bots perform more complex tasks, other than booking a flight or finding the location of a restaurant.
As FAIR researcher Mike Lewis puts it:
You can imagine a world where this accelerates and eases interactions — for instance, future scenarios where people might use chat bots for retail customer support or scheduling – where bots can engage in seamless conversations and back-and-forth, human-like negotiations with other bots or people on our behalf
If you liked this article, follow us on Twitter @themerklenews and make sure to subscribe to our newsletter to receive the latest bitcoin, cryptocurrency, and technology news.