Nigel Is an AI That May Help Users Make Political Decisions

People are relying on the Internet and their devices more and more to gather information about their world and make decisions. Oftentimes, these decisions are about where to eat tonight, whether a beer is any good, or how a cryptocurrency is performing. But what if a device or a piece of software could help people decide how to vote? An Oregon-based project called Nigel is aiming to do just that.

“Okay Nigel, Who Should I Vote For This November?”

The team behind Nigel wants the assistant to help its users in almost every aspect of their lives, including by offering political advice. While other personal assistants shy away from saying anything political, Nigel will readily weigh in.

It is an ambitious goal: Nigel is meant to learn its users’ objectives and understand their realities. To do this, Nigel will have free rein over the user’s devices. Unchecked and unmoderated, Nigel will program itself to assimilate to the user’s reality. Once it has done so, it will keep pushing the user toward their goals as best it can.

Nigel can already silence a phone automatically when its user enters a movie theater, and the team hopes to have Nigel reading and writing at a grade-school level by next year. The next step is getting Nigel to make political decisions for its users based on the information it collects. The team believes people will learn to trust Nigel with these decisions because Nigel will be far more in tune with its users’ emotions and realities than any politician.

Potential Problems With Nigel

While the concept is intriguing, it raises serious problems. The most obvious is that any software is vulnerable to hacking and other exploits. Data collected about the user would be especially exposed if it were stored in a centralized location, and plenty of user data is stolen as it is. If Nigel is to know everything about its users, or enough to make major decisions on their behalf, the stored data would be an extremely attractive target for malicious actors. The other side of the coin is that attackers who were not interested in stealing data might instead manipulate the software into recommending a particular candidate. The software would have to be extraordinarily secure to ensure that users’ privacy is not violated and that their opinions are not swayed by external actors.

Another potential issue concerns Nigel as a general artificial intelligence. Elon Musk and other tech magnates have already offered bleak views of unchecked artificial intelligence. If Nigel were to achieve true general intelligence, it might influence its users in ways that privilege and benefit itself over humans. Even short of outright hostility toward humans, an assistant that privileges its own interests over those of its users would be conflict enough.

I am fully in favor of voters being more informed and more involved in the democratic process, and I want to believe we can achieve that without software telling us which way to vote. Nigel may be helpful for gathering information, but suggesting how to vote crosses an ethical line and is potentially dangerous.