The singularity is coming

But it’s nothing to worry about

Bringing up the singularity—the point of no return after which our feeble human minds are unable to even fathom what the future may bring—is a sure-fire way to ensure the fun engineering party you’re attending (what? a fun engineering party? invite us next time!) ends in a passionate feud. This seemingly all-pervasive topic is a favourite amongst the technogeek elite—the sci-fi writers, futurists, engineers and electric-car-building billionaires of this world. In more practical terms, the singularity is often defined as the point where artificial intelligence (AI) algorithms become more capable than their human creators.

There are two prevailing opinions regarding what would happen after the singularity (hence the passionate feud). The first view, which is very optimistic, goes a little like this: once sentient machines become smarter than us, we must trust that they will generously continue helping us achieve our goals (curing illnesses, improving our quality of life, and so on). Of course, the defenders of this approach will argue that we can make sure machines always have our best interests in mind by including some basic rules deep within their design (such as Asimov’s three laws of robotics). The second view is—how shall we put this lightly—slightly more pessimistic. According to this version of events, sentient machines will realise we are not helpful to them (or worse, that we are harmful to them) and will destroy us, enslave us or otherwise harm us. This is the Terminator-style apocalypse, and it sells well (to be fair, a movie where the robots and humans live happily ever after would never make a big splash at the box office). Hopefully, by the time you reach the end of this article, you will disagree with both views.

But wait, you may ask, before I start preparing my house for the machine apocalypse (though if you are, remember to pack electromagnets), are we even sure this will happen? As is often the case when the matter at hand involves predicting the future, there are conflicting reports. Ray Kurzweil, author of The Singularity Is Near, believes that, well, the singularity is near. More specifically, he narrows it down to the year 2045, give or take. Others, like the scientist and author Steven Pinker, dismiss the idea altogether. Some of the criticism of the concept of a technological singularity is very reasonable and embodies the kind of scientific skepticism that should be revered. For instance, some argue that technological progress will not follow an exponential curve indefinitely, but may on the contrary slow down as each further improvement becomes harder to achieve because of increasing levels of complexity and accumulated knowledge (this is known as the complexity brake).

Regardless of how we look at the problem, I think it is only fair to assume that, mass extinction aside, we will inevitably develop artificial intelligence on par with, or superior to, our own. It may take longer than expected, much longer, but that does not prevent it from happening. To argue that it is impossible to develop an algorithm smarter than ourselves is likely a common, although misguided, form of anthropocentric wishful thinking. There is no escaping it—the singularity, in the sense of machines that are smarter than humans, will happen sooner or later.

Should we prepare? Can we prepare? That probably depends on which side of the argument you stand on, the optimistic side or the apocalyptic one. I believe the singularity won’t matter, because by then we will be so intimately merged with machines that it won’t be us versus them. Instead, the singularity will directly benefit us. With our current technology, it makes a lot of sense to distinguish between humans and the tools they use—the machines. But in the future, as we develop increasingly complex and efficient brain-to-machine interfaces, that distinction will start to blur. We will have a tight symbiotic relationship with our technology, allowing us to take full advantage of it.

By that time, asking about the singularity won’t make much sense anymore. The evolution of the human mind and its capabilities will be a progressive process, and the day we become incrementally smarter, thanks to our merger with computers, will not be a significant day at all. I once read this “mind-blowing fact” somewhere: there was a specific day in your life when your mother carried you in her arms for what turned out to be the last time, yet that day bore no particular importance, and nobody noticed it. The singularity will be similar. We may see it looking back, decades later, and maybe pin down the year it happened, but it will carry no special significance as it unfolds.

Of course, there is still the possibility that we will develop super-intelligent AI before we have the technology to seamlessly interface ourselves with computers. If that were to happen, then we might indeed face the dilemma we discussed earlier. But both areas (AI and neuroprosthetics) are almost unimaginably complex. Both have captured the imagination of thousands of scientists and engineers around the world. Both have spawned enormous and very active fields of research. And in many instances there is a fair amount of overlap (AI is used on a daily basis in neuroprosthetics, and the study of the brain is often key to designing novel AIs). It thus seems reasonable to assume that they will continue to develop in parallel, and that extreme advances in one will not happen without similar advances in the other.

Although the singularity is inevitable, it won’t be something to worry about. In fact, we won’t even notice it.



