On Practical Artificial Intelligence

From killer robots to human replacements, “artificial intelligence” is a buzzword everyone must adapt to, or die. According to Andrew Ng, chief scientist at Baidu Research, “Five years from now there will be a number of S&P 500 CEOs that will wish they’d started thinking earlier about their AI strategy.” So what does AI really do, why should you go for it, and, more importantly, how?

A lot of what we do is dumb. Take advertising as an example. Marketers think they only need to get their message in front of the right set of eyes. As a result, invasive ads pop up everywhere, delivering irrelevant messages to users who avoid anything that resembles an ad. When I face an unskippable YouTube ad, I take off my headset and switch to another tab until the ad is gone.

Clearly, this method is not working. It treats human beings as a generic mass, disregarding the fact that people have wildly different tastes and attitudes and cannot be categorized merely by a keyword. Here is where AI enters the picture.

With AI, we can potentially craft each and every message for the individual it targets. A variation of this technique was used by Cambridge Analytica to influence voters, and from a pure marketing perspective it yielded good results.

AI is stupid

Despite AI’s significant gains, including beating humans in competitions, AI in its current form is nothing more than statistics. Deep learning is a bit different, but it too eventually boils down to a large matrix of knobs, each tuned to achieve a specific result. What AI does is pattern recognition, whether the pattern is a human face (face detection) or the telltale signs of junk mail (spam detection). AI has no understanding of what it is doing, and that is the beauty of it: numbers themselves can be “intelligent”.

Because AI depends so heavily on statistics, it needs two things: a lot of data (the “training set”) and ways to interpret that data. According to Christopher Wylie, Cambridge Analytica developed 253 different algorithms based on harvested Facebook profiles. That is because different algorithms have different outcomes and accuracy, and it is up to a human to determine which algorithm works best (which is why there is also a “test set”).

A practical example

Let’s take a spam classifier as an example, a task easily handled with the naïve Bayes algorithm. The concept is simple: take two sets of emails (spam and non-spam) and count every word in each. We might learn, for instance, that the word “hello” appears 100 times in spam emails and 50 times in non-spam emails. So if we see “hello” in a new email, that email is more likely to be spam. Of course, this evidence is combined across thousands of other words before naïve Bayes produces a final score, giving us an overall verdict on whether the email in question is more likely spam or non-spam. It’s that simple.
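To make the idea concrete, here is a minimal sketch of that word-counting approach in Python. The tiny training emails are invented for illustration, and real classifiers would work in log space and with far more data; the scoring uses add-one smoothing so unseen words don’t zero out the result.

```python
from collections import Counter

# Tiny hand-made training sets (purely illustrative).
spam_emails = ["win cash now", "cash prize hello", "hello claim prize"]
ham_emails = ["hello meeting tomorrow", "see you at lunch", "project update hello"]

def word_counts(emails):
    counts = Counter()
    for email in emails:
        counts.update(email.split())
    return counts

spam_counts = word_counts(spam_emails)
ham_counts = word_counts(ham_emails)
spam_total = sum(spam_counts.values())
ham_total = sum(ham_counts.values())
vocab = len(set(spam_counts) | set(ham_counts))

def spam_score(email):
    # Multiply per-word likelihoods, with add-one smoothing to avoid zeros.
    spam_p, ham_p = 1.0, 1.0
    for word in email.split():
        spam_p *= (spam_counts[word] + 1) / (spam_total + vocab)
        ham_p *= (ham_counts[word] + 1) / (ham_total + vocab)
    # Normalize into a rough "probability of spam".
    return spam_p / (spam_p + ham_p)

print(spam_score("claim your cash prize"))   # high → spam-like
print(spam_score("see you at the meeting"))  # low → non-spam-like
```

Even with six training emails, words like “cash” and “prize” already push the score toward spam, which is exactly the pattern-recognition-without-understanding point made above.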

Of course, there are ways to make this more accurate, for instance by removing stop words or extracting n-grams. But the point is that when we have enormous amounts of data, patterns emerge from formulas that yield surprisingly accurate results despite their simplicity.

The problem with AI

Any software automation expert can tell you that, like any other code, what we automate is essentially a repetition and optimization of what humans have done before. AI is not creative; it is repetitive. So before applying AI, make sure you are asking the right questions.

Advancements in AI have given us access to natural language processing and image recognition, readily available as black-boxed APIs. But instead of demanding that AI think like a human, we get the best results when we ask how AI can augment human behavior. For instance, if I want a specific audience to see my blog posts, then instead of building an AI that traverses their social media activity and tries to understand their mindset with NLP, I can simply use statistics to see which posts they share most, or when they are most active on Twitter. This approach is easy to build and pays off quickly, while the former could take ages to develop. Cambridge Analytica used this concept and applied it to Facebook “likes” to form psychological profiles of its targets.
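The “just use statistics” approach really is this small. The sketch below uses made-up share events (the post names and hours are invented for illustration) to find the most-shared posts and the hour the audience is most active:

```python
from collections import Counter

# Hypothetical share events: which post was shared, and at what hour.
shares = [
    {"post": "intro-to-ai", "hour": 9},
    {"post": "intro-to-ai", "hour": 10},
    {"post": "gdpr-guide", "hour": 10},
    {"post": "intro-to-ai", "hour": 18},
    {"post": "gdpr-guide", "hour": 10},
]

# Count shares per post and per hour; most_common sorts by frequency.
top_posts = Counter(s["post"] for s in shares).most_common(2)
peak_hour = Counter(s["hour"] for s in shares).most_common(1)[0][0]

print(top_posts)   # → [('intro-to-ai', 3), ('gdpr-guide', 2)]
print(peak_hour)   # → 10
```

Two frequency counts answer “what do they share?” and “when are they around?”, with no NLP pipeline in sight.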

The second barrier for AI is its cornerstone: data. With GDPR in force, companies can no longer harvest user information as loosely as they used to: they must obtain the user’s consent, store the data securely, and honor users’ rights to retrieve their data and to be forgotten. While this may slow AI advancements, it enforces a healthier data-handling etiquette. After all, GDPR is about how you collect and manage data, not how much of it you have.

To AI or not to AI

You should now have an understanding of what AI really is and how it is best used, at least for now. AI will likely keep evolving, and speculation around Artificial General Intelligence and Artificial Superintelligence suggests an age of explosive growth in which one AI creates a more intelligent AI. But until we get there, we can use statistics to make ourselves more intelligent.
