3 reasons removing fake news from social media is a terrible idea, and what to do instead

In the wake of the US presidential election, an important issue has been brought up: social media is full of fake news. Some go so far as to suggest this may have significantly swayed the final result. Regardless of the answer to that question, the issue of false claims on the internet is a very profound one, and reaches much, much further than the realm of politics. There should be no sugar-coating it: misinformation is dangerous, and a potential threat to society.

As the debate rages on, some of the large internet companies (notably Google and Facebook) have chimed in, offering solutions with varying degrees of commitment. The specifics remain unclear, but there is a general notion that stories should be fact-checked. On that basis, one supposes, fake news would be labelled, or perhaps even removed. I believe this is the wrong way to address the issue. Here’s why:

1. False facts are dangerous, but partial information is worse

The problem with fake news is unfortunately more complex than presenting wrong facts. A much more effective (and more common) approach is to steer people toward a certain conclusion by giving them only some of the facts. Let’s take the following example: “Human exposure to X-rays can lead to severe burns, cancer and even death. X-rays are classified as a carcinogen by the World Health Organisation. Yet, every day, thousands of people are exposed to X-rays in hospitals across the world, including children and elderly people, two groups known to have weaker immune systems.”

Should a fact-checking system flag the above statement as false? Certainly not. None of that is factually incorrect. However, it’s clearly a misleading paragraph. This is achieved by leaving out the following piece of information: the doses of X-rays used in hospitals are very low, so low that any of those scary side-effects are extremely unlikely to occur. Humans are exposed to X-rays from natural sources every day at doses not much lower than this. This makes it easy to see why removing false facts is simply not enough.

I deliberately avoided using an example from this year’s election (for fear of sparking an irrelevant debate), but one need only open a newspaper and pay attention to see examples of this strategy being used every single day (whether purposefully or not).

2. In the real world, information is often more nuanced than right or wrong

A smart engineer at Facebook might object to the first point by saying that a sophisticated enough system could handle the example above too. It might prompt the reader with a warning: “this information is misleading because it leaves out this and that crucial information.” But this would still be wrong, because it would be pushing the reader towards a specific conclusion, thereby introducing bias. The problem here is that the paragraph about X-rays, and the conclusion it leads to, cannot be labelled right or wrong. Whether or not you think exposing children to X-rays is acceptable given the minimal risks is purely subjective. Any system steering you towards one conclusion or the other is a biased system.

Certainly, you may say, some facts are not nuanced. Some facts are simply right or wrong. And to some extent that is true. For instance, Steve Jobs died on October 5th, 2011. Any article claiming otherwise would most certainly be making a false claim. But as I hope to have shown above, the danger of misinformation doesn’t lie in such openly wrong statements (although they certainly constitute one aspect of it). The real danger of fake news lies in subtler forms of manipulation, such as the use of partial information. A program that identifies simple false claims will most likely miss these, ironically lending them more credibility by leaving them unchallenged.

3. Facts are not dogma; they should be easy to challenge

Nothing is absolute. Every single fact can be challenged, questioned and potentially changed. That is the foundation of our modern world (and of science), inherited from the Enlightenment of the 18th century. The very idea of a central authority with the power to label facts as true or false, regardless of the nobility of its intentions, is a dangerous step backwards. The intentions may be good, but the results may not be. Facts are challenged all the time. Would a system that labels every challenge to an established fact as “fake” raise the barrier to healthy debate? Who gets to decide which facts are right and which are wrong? And how do we decide when there is enough evidence to retire an outdated fact?

What we should do instead

The notion that we must remove or hide information is fundamentally flawed. People sharing fake news is a symptom, not the issue. Treating the symptom does little to eradicate the underlying problem. The real problem here is that the average reader doesn’t approach news critically (see my article on 7 ways to think more critically for more on this). Perhaps this is caused by a failure of our educational systems to prepare students for the age of information. Regardless of the cause, the issue is present and must be fought.

Certainly we cannot solve the issue by taking facts away. Instead, we should give more. We should provide tools that help readers develop critical thinking and approach facts with a more scientific mind. Imagine this: a box below each link (on Facebook, or Google, or anywhere else) showing two columns. One column says “here are other articles supporting this view”, and the other says “here are articles that view this issue differently”. Under both columns, a list of articles, their sources and the total number of items in each category would be displayed. No special treatment, no stars next to reliable sources (what is a reliable source anyway?). Equal footing. No bias. We give as many tools as possible and make it as easy as we can for readers to come to their own critical conclusions. Nothing more.
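To make the idea concrete, here is a minimal sketch in TypeScript of the data such a box might carry and how it could render its two columns. Everything here is hypothetical: the type names, the data shape and the example entries are mine, not any existing Facebook or Google API.

```typescript
// Hypothetical data model for the proposed "critical thinking" box.
// Nothing in this shape comes from a real platform API.

interface RelatedArticle {
  title: string;
  source: string; // publication name, displayed next to the title
  url: string;
}

interface CriticalThinkingBox {
  supporting: RelatedArticle[]; // "here are other articles supporting this view"
  differing: RelatedArticle[];  // "here are articles that view this issue differently"
}

// Render both columns as plain text. The symmetry is deliberate:
// both lists get identical treatment (same layout, same ordering,
// no badges for "reliable" sources), so the box itself adds no bias.
function renderBox(box: CriticalThinkingBox): string {
  const column = (label: string, articles: RelatedArticle[]): string =>
    [
      `${label} (${articles.length})`,
      ...articles.map(a => `  - ${a.title} (${a.source})`),
    ].join("\n");

  return [
    column("Articles supporting this view", box.supporting),
    column("Articles that view this issue differently", box.differing),
  ].join("\n\n");
}

// Example usage with made-up entries:
const box: CriticalThinkingBox = {
  supporting: [
    { title: "Hospital X-ray doses and patient safety", source: "Example Health Journal", url: "https://example.org/a" },
  ],
  differing: [
    { title: "Questioning routine imaging for children", source: "Example Daily", url: "https://example.org/b" },
  ],
};

console.log(renderBox(box));
```

The symmetry in renderBox is the whole point of the proposal: because both columns receive exactly the same treatment, the only thing the box adds is more information, never a verdict.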

Of course this "critical thinking" box could have many other features, such as an information tab which explains what the source of a given fact is, and the methodology (for instance if a fact was obtained through a census, etc). There is one caveat to this magical box sitting below links. It would be very hard to build. Even harder to build well. But the general idea, which is to help users reach their own informed opinion, is a powerful one, and one that big companies can start pursuing today.

It may be embarrassingly clichéd, but the age-old adage fits the situation perfectly: give a man a fish and he will eat for a day; teach a man to fish and he will never go hungry again. We must provide social media users with the tools for critical thinking, not feed them what we believe is right. The benefit of solving the underlying issue, as opposed to just the symptom, is that people will learn to spot dubious claims well beyond Facebook and Google. They will spot them on the radio, at dinner parties and during political speeches. That makes for a better world.

My plea to social media giants is simple: remember that the internet has always been about access to information, not the opposite. Let's solve the problem with more, not less. If not from me, then take it from Benjamin Franklin: "Printers are educated in the Belief that when Men differ in Opinion, both Sides ought equally to have the Advantage of being heard by the Public; and that when Truth and Error have fair Play, the former is always an overmatch for the latter."



If you enjoyed this story, consider subscribing to my website (you can use this link). That way, you'll automagically be notified every time a new story is online, right in your mailbox! I know, technology, right?
