Several artificial intelligence projects have launched over the past few years, and most still had kinks to work out. Worryingly, multiple AI solutions showed signs of racial bias once they were deployed in a live environment. It is one of the major hurdles to overcome before artificial intelligence products and services can become part of mainstream society.
Although this incident is not racism in its most extreme form, it highlighted an underlying problem with AI-driven recognition solutions. After feeding a dataset of nearly 2,000 facial photos to their system, Chinese researchers discovered their project showed a high degree of bias. The software predicted "criminality" from structural facial features, such as the distance between the inner corners of the eyes and the curvature of the lips. This result showed there is still a lot of fine-tuning to perform before solutions like these can be trusted to analyze facial features.
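To make the failure mode concrete, here is a minimal, purely illustrative sketch of how a classifier trained on facial-geometry features faithfully reproduces whatever spurious correlations sit in its training data. This is not the researchers' actual model; the data is synthetic and the feature names are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

n = 2000  # roughly the size of the dataset described above
# Two synthetic features standing in for the geometry the study used:
# eye inner-corner distance and lip curvature.
features = rng.normal(loc=0.0, scale=1.0, size=(n, 2))
# Labels that correlate (by construction) with the first feature,
# mimicking a spurious pattern baked into the collected photos.
labels = (features[:, 0] + 0.3 * rng.normal(size=n) > 0).astype(int)

model = LogisticRegression().fit(features, labels)

# The model "predicts criminality" from face geometry with apparent
# accuracy, but it has only memorized the sampling bias in the data.
print("training accuracy:", model.score(features, labels))
print("learned weights:", model.coef_)
```

The model scores well on its own data, yet the pattern it learned came entirely from how the photos were collected, which is exactly why a biased dataset produces a biased "criminality" detector.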
The hit sensation of mobile gaming in 2016 showed the world it was somewhat racist as well. Once the game was released in July of that year, users complained that very few Pokemon could be captured in predominantly African-American neighborhoods. It turned out the creators of the algorithm powering Pokemon Go had not provided a diverse training set, nor had they spent time in those neighborhoods. The AI solution itself was not at fault for this degree of racism; human error caused this unfortunate mishap.
It is never a positive sign when an AI solution designed to fight crime shows an obscene amount of racial bias. Northpointe designed an AI system to predict the likelihood that alleged offenders will commit another crime in the future. Unfortunately, the algorithm powering this solution showed a severe racial bias.
African-American offenders were far more likely to be marked as high-risk, according to the AI. Moreover, it later turned out the software's predictions were unreliable across the board. A failed artificial intelligence experiment, but another valuable lesson learned. Oddly enough, no one has heard from this artificial intelligence project since.
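One common way this kind of bias is measured is to compare false positive rates between groups: how often people who did not reoffend were nonetheless flagged as high-risk. Below is a minimal sketch of such an audit on synthetic data; it is not Northpointe's scores or code, and the threshold and bias term are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10_000
group = rng.integers(0, 2, size=n)     # 0 / 1: two demographic groups
reoffended = rng.random(n) < 0.35      # ground truth, same base rate for both
# A biased score: group 1 gets an extra bump, so more of its members
# cross the "high risk" threshold regardless of the ground truth.
score = rng.random(n) + 0.15 * group
high_risk = score > 0.6

for g in (0, 1):
    mask = (group == g) & ~reoffended  # people who did NOT reoffend...
    fpr = high_risk[mask].mean()       # ...yet were flagged high-risk
    print(f"group {g}: false positive rate = {fpr:.2%}")
```

Even though both groups reoffend at the same rate in this toy data, the biased score flags one group far more often, which mirrors the disparity reported for the real system.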
On paper, it makes some sense to use artificial intelligence as a judge in a beauty contest. Unfortunately, the first AI of its kind to judge such a contest did not do its job all that well. The algorithms were supposed to evaluate contestants based on criteria linked to human beauty and health. However, without a properly diverse training set, nearly all of the winners were white women. It is becoming evident that many of these artificial intelligence solutions show signs of "white supremacy" for some reason.
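The training-set problem described above can be reproduced in a few lines: when one group dominates the winning examples, a model learns to reward group membership rather than the criteria it was meant to judge. The sketch below is synthetic and purely illustrative; the features are stand-ins, not anything the contest actually measured.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

n = 5000
skin_tone = rng.random(n)              # 0 = darkest, 1 = lightest
symmetry = rng.random(n)               # a stand-in "beauty" feature
X = np.column_stack([skin_tone, symmetry])

# Biased labels: winners are drawn almost entirely from light-skinned
# entrants, regardless of the symmetry feature.
winner = (skin_tone > 0.7) & (rng.random(n) < 0.8)

model = LogisticRegression().fit(X, winner)
print("weight on skin tone:", model.coef_[0][0])  # dominates...
print("weight on symmetry:", model.coef_[0][1])   # ...this one
```

The fitted model puts nearly all of its weight on skin tone, so it will keep picking the over-represented group no matter how the intended criteria are defined.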
When it comes to the Tay chatbot developed by Microsoft, there is not much left to say. Releasing an AI-powered chatbot on Twitter may not have been the brightest idea the technology giant has ever had. It did not take long before Tay turned into a pure troll, displaying KKK-like behavior combined with an anti-female attitude. It took less than a day for Tay to be yanked offline, and no one has heard from the bot since, even though Microsoft promised to make "adjustments".