Is artificial intelligence intelligent at all?

The world of artificial intelligence (AI) went into a frenzy when AlphaGo, a computer program developed by Google's DeepMind, beat the world's top Go player. This was remarkable for two reasons. Not only is Go a much harder game for computers to play than chess (which computers mastered almost 20 years ago), but experts in the field had expected this milestone to be as much as a decade away.

Deservedly, the world's media outlets took this opportunity to reignite the debate about artificial intelligence and how it may affect our socio-economic order in the coming years. One particularly important topic in this debate is the notion that improved artificial intelligence could eventually take over many people's jobs, a prospect that is either terrifying or an immense opportunity for humankind, depending on whom you ask.

When we imagine computer algorithms taking over jobs and tasks usually performed by humans, most of us get a queasy feeling. People tend to fall into two diametrically opposed categories. The first are those who think computer algorithms could never replace humans, except perhaps in some very simple and repetitive jobs. The second are the tech enthusiasts and futurists who believe that the inevitable progress of science will deliver increasingly complex machines which will not only replace us, but eventually surpass us too (this is called the singularity, which I explore in more detail in this article).

The first group are the AI skeptics. This camp usually argues that machines cannot make the nuanced, adaptive responses humans can, because machines must follow a strict set of rules, and no set of rules could ever account for every possible scenario. Unfortunately, this line of reasoning is flawed: it misunderstands how modern machine learning algorithms work, relying instead on a more traditional view of computer software, where everything happens according to logical decisions and relationships defined by the programmer (if this happens, then do that). The whole field of artificial intelligence is based on a very different approach: machine learning algorithms are designed to learn how to generalise.

The basic idea is as follows. A machine learning algorithm is shown some examples of data (photo-recognition software, say, might be shown 10 different images of cars, 10 images of cows, and so on). From this data (called a training set), the software learns to extrapolate and generalise. Once trained, the algorithm can handle new data: in our example, it can say whether a photo contains a car or a cow, even if it has never seen that photo before. This is important, because it means the algorithm learned the rules by itself. No programmer had to say "if it has four wheels, then it is a car" or specify other arbitrary conditions. Instead, the software extracted characteristics from the data it was shown and built its own internal representation of a car. In fact, it is often very hard to peer back inside the algorithm and understand how it makes decisions.
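To make this concrete, here is a minimal sketch of the train-then-predict idea described above, using scikit-learn. The features, labels and choice of classifier are my own illustration, not anything from a real system: in actual photo recognition, each image would first be reduced to a vector of numeric features.

```python
# A toy version of "learning from a training set, then generalising".
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training set: each row describes one image with two crude,
# made-up features, e.g. [wheel-like shapes seen, leg-like shapes seen].
X_train = [[4, 0], [4, 1], [3, 0],   # images of cars
           [0, 4], [1, 4], [0, 3]]   # images of cows
y_train = ["car", "car", "car", "cow", "cow", "cow"]

clf = DecisionTreeClassifier()
clf.fit(X_train, y_train)   # the algorithm extracts its own rules from the data

# A new, never-before-seen image: the classifier generalises from training.
print(clf.predict([[5, 1]]))   # -> ['car']
```

No one wrote an "if it has four wheels" rule here; the classifier inferred its own decision boundary from the examples alone.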

Clearly, this is not the kind of simple rule-following machine we are used to dealing with. Once we realise how these programs work, we start to see why they are called artificially intelligent. In many ways, they mimic the learning process of a child, who accumulates experiences and uses them to extrapolate and approach novel situations with ease. And because AI learns how to do things on its own, we won't need to understand how our brain solves certain complex tasks in order to teach them to machines; we'll simply let them figure it out.

The second group are the AI enthusiasts. This camp tends to exaggerate in the opposite direction, placing an overly optimistic amount of hope in artificial intelligence's ability to overcome all challenges. Although this group may eventually be proven right in the long term, the reality is that, for the moment, artificial intelligence is still very limited. Despite AI's ability to generalise and adapt to novel situations, for now that ability is confined to a single domain at any given time. The image-recognition software that can recognise cars and cows in photographs would be completely unable to play Go, for instance. Training a single algorithm to excel at several heterogeneous tasks is the real challenge, and one which may not be solved in the short term.

Although artificial intelligence programs are certainly smarter than the AI skeptics think, they are still far more limited than the AI enthusiasts like to believe. We may call them intelligent, but in reality they are based on simple statistics and basic networks of interconnected processing nodes (neural networks). Once we understand how these algorithms work, they lose some of their mystique. It's the same feeling one gets when an impressive card trick is revealed: the magic is simply gone.
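As a hint of how unmysterious these building blocks are, here is what a single "processing node" of a neural network boils down to. All the numbers are made up for illustration; a real network simply wires thousands of these together and tunes the weights during training.

```python
import numpy as np

# One neuron: a weighted sum of its inputs, passed through a squashing
# (sigmoid) function. That's the entire "processing node".
def neuron(inputs, weights, bias):
    return 1 / (1 + np.exp(-(np.dot(inputs, weights) + bias)))

x = np.array([0.5, 0.8])        # two input values (arbitrary)
w = np.array([1.2, -0.7])       # learned weights (arbitrary)
print(neuron(x, w, bias=0.1))   # the node's output: plain arithmetic, ~0.53
```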

But here is something to think about: perhaps the fact that relatively simple computer algorithms can replicate some complex human behaviours is an indication that our brains may be simpler than we think. Deep down, the reason the notion of AI replacing people is so hard to swallow is the flattering human assumption that our intelligence is the consequence of some deeper, perhaps mystical force, and that a "simple" machine could not possibly replicate the wide range of behaviours and flexible decision making humans demonstrate. Well, we're in for a humbling surprise: not only will AI surpass humans in almost every possible occupation, it will also demonstrate that intelligence isn't as unique and special as we thought.



If you enjoyed this story, consider subscribing to my website (you can use this link). That way, you'll automagically be notified every time a new story is online, right in your mailbox! I know, technology, right?
