How social media is pushing us toward 1984

Social media platforms are manipulating users and distorting our sense of reality—just as George Orwell predicted.


In his dystopian novel Nineteen Eighty-Four, George Orwell created “Big Brother” and, with it, the perfect metaphor for Big Tech. Orwell’s “telescreens,” which can’t be switched off and which record every conversation and monitor every movement of his characters, bear a striking resemblance to our smartphones. And just as telescreens automatically push programs on their viewers, social media’s algorithms today decide what we see and shape our vision of the world. Orwell’s heroes, under the Party’s surveillance, try to ban certain ideas from their minds, much as we are starting to monitor our own thoughts and actions under the influence of the internet.

While this metaphor is useful (especially as China actively works to turn fiction into reality), as I reread the novel over the summer, I was struck by how Orwell seemed to anticipate our relationship with social media in other ways. With his concepts of the Two Minutes Hate, a daily ritual of outrage orchestrated by the Party, and Newspeak, a deliberately restricted language designed to limit people’s ability to communicate in nuanced ways, Orwell revealed how people’s thoughts, emotions, and ultimately actions can be manipulated. The dynamics he describes uncannily reflect the way social networks now influence the lives of billions of people around the world.

TWO MINUTES HATE

In Orwell’s novel, people interrupt their activities each day and stand in front of their telescreens to flame enemies and celebrate Big Brother. The enemies change regularly, but the ritual, called the Two Minutes Hate, doesn’t:

The horrible thing about the Two Minutes Hate was not that one was obliged to act a part, but that it was impossible to avoid joining in. Within thirty seconds any pretence was always unnecessary. A hideous ecstasy of fear and vindictiveness, a desire to kill, to torture, to smash faces in with a sledge hammer, seemed to flow through the whole group of people like an electric current, turning one even against one’s will into a grimacing, screaming lunatic. And yet the rage that one felt was an abstract, undirected emotion which could be switched from one object to another like the flame of a blowlamp.

This description sounds fantastical until you consider what’s happening on our social networks. During the 2014 Gamergate scandal, a woman and her family became the target of a massive harassment campaign that included leaking her personal photos and threatening her with rape and death. Things have only gotten worse. Today, inflammatory hashtags and hoaxes are regularly promoted by fake accounts until they become official trends and are picked up by real people and even the mainstream media. The number of right-wing and conspiracy-spewing influencers propagating racist and misogynist memes to their followers is increasing. And all of this is not just enabled but empowered and compounded by social media platforms.

Beyond these highly organized hate campaigns, everyday harassment and bullying have also spiked. According to a report from the Pew Research Center, 59% of US teens have been bullied or harassed online. In the run-up to December 2019’s U.K. general election, an investigation by the BBC and the liberal think tank Demos found a surge in abuse and death threats on Twitter directed at parliamentary candidates. Around 7% of the tweets received by candidates (some 334,000 in total) were categorized as abusive.

Before you think “not me,” consider whether you have ever participated in social media outrage, only to wake up the next day and think, “How could I have done that?” Have you ever realized that you didn’t have all the facts and yet felt compelled to react? Have you ever posted, liked, or retweeted an article with a particularly incendiary title without actually reading it? If you have, you’ve participated in the social network version of Orwell’s Two Minutes Hate.

And while I wish we could all be a little more in control of our online urges, Orwell’s dystopian novel reminded me that the real issue is that viral outrage is more than an accidental feature of social networks. It is the core of their products, of what they’re aiming to create: a megaphone that continuously puts content in front of you so powerful, so emotional, and so extreme that, as Orwell says, it’s “impossible to avoid joining in.” And as you keep scrolling, finding new topics for rage is easy: YouTube, Facebook, and Twitter’s algorithms make sure of that.

The object of people’s fury is as irrelevant to the Party in Orwell’s novel as it is to social networks today. What matters is that people who are enraged are deeply engaged and easier to manipulate.

NEWSPEAK

Equally scary is how, in Nineteen Eighty-Four, the Party uses Newspeak to strip meaning out of language, making it impossible for people to have certain thoughts. Reducing the number of words available to people prevents them from having proper feelings and ideas, and makes the world more difficult to process and comprehend. When language loses its meaning (“war is peace, freedom is slavery, ignorance is strength”), the Party is in control of what is considered reality. Facts and independent thought don’t really exist anymore:

By 2050—earlier, probably—all real knowledge of Oldspeak will have disappeared. The whole literature of the past will have been destroyed. Chaucer, Shakespeare, Milton, Byron—they’ll exist only in Newspeak versions, not merely changed into something different, but actually contradictory of what they used to be. Even the literature of the Party will change. Even the slogans will change. How could you have a slogan like Freedom is Slavery when the concept of freedom has been abolished? The whole climate of thought will be different. In fact, there will be no thought, as we understand it now. Orthodoxy means not thinking—not needing to think. Orthodoxy is unconsciousness.

Our own language is becoming more reductive and simplistic, as a result of social media’s character limits and use of hashtags to surface and promote catchy, easy-to-understand ideas, events, and trends. On these platforms, nuance is not rewarded. And by allowing any opinion (no matter how fringe) to take on the appearance of fact, social networks have made it harder for us to comprehend our reality.

SIMPLE SOLUTIONS

Orwell’s book concludes with the main character, Winston, totally accepting the Party’s rule, fully participating in the ritualistic Two Minutes Hate, and believing that two plus two equals five. We don’t have to be Winston and, more importantly, Big Tech doesn’t have to behave like the Party.

What I find disheartening is that there are a few simple measures, advocated for a long time, that would, if not solve the problem entirely, at least help significantly:

  • Add a warning message. Social media companies have tried to eliminate every friction point for users in order to maximize the volume of communication and engagement on their platforms. But what if they took a different approach? What if, when a user was about to post or tweet something inflammatory, social media companies interrupted with a pop-up message saying something along the lines of “Are you sure?” Instagram implemented something similar in 2019 to curb hurtful, reactive comments. Though this approach won’t prevent everyone from posting outrageous content, it will force a lot of us to pause and reflect before we do so.
  • Stop showing suggested posts or videos as a way to keep users scrolling and viewing even after they have seen everything that the people they follow have posted. YouTube launched Autoplay in 2015, serving viewers a continuous stream of suggested videos, and the feature is widely considered the main driver of the dissemination of extreme content. Instagram, which had long resisted, changed course in August and began including suggested posts in users’ feeds.
  • Aggressively fight for facts. When someone writes that “two plus two equals five,” make it your mission to at least stop propagating the lie, no matter how exciting it is for your users. This will be an incredibly difficult and likely never-ending battle. Mistakes will be made, but they’re worth it. Efforts so far have been too timid; investments in robust fact-checking teams and processes need to be ramped up dramatically. Researchers are still divided on whether placing warning messages alongside false information is effective in limiting its spread: some have concluded that it could make users less likely to share; others have seen no impact. But it’s worth trying.
  • Relentlessly identify and shut down accounts, pages, and forums that promote hate. A study of the effects of Reddit’s 2015 ban of two hate communities demonstrated that “by shutting down these echo chambers of hate, Reddit caused the people participating to either leave the site or dramatically change their linguistic behavior.” In other words, the overall level of hate decreased, even when the same users continued to use Reddit and joined other forums.

All of these solutions come down to a simple idea: A truly human-centric business—one that wants to improve humanity—should support its users’ strengths rather than exploit their weaknesses. Though our world today may resemble Nineteen Eighty-Four, there’s still time for us to write a different ending.

Maelle Gavet has worked in technology for 15 years. She served as CEO of Ozon, an executive vice president at Priceline Group, and chief operating officer of Compass. She is the author of a forthcoming book, Trampled by Unicorns: Big Tech’s Empathy Problem and How to Fix It.

https://www.fastcompany.com/90545787/how-social-media-is-pushing-us-toward-1984
