In a recent article for The Atlantic, Adrienne LaFrance compared Facebook to a Doomsday Machine: “a device built with the sole purpose of destroying all human life.” In the Netflix documentary The Social Dilemma, the filmmakers imagine a digital control room where engineers press buttons and turn dials to manipulate a teenage boy through his smartphone. In her book The Age of Surveillance Capitalism, the Harvard social psychologist Shoshana Zuboff paints a picture of a world in which tech companies have constructed a massive system of surveillance that allows them to manipulate people’s attitudes, opinions and desires.
In each of these dystopian depictions, people are portrayed as powerless victims, robbed of their free will. Humans have become the playthings of manipulative algorithmic systems. But is this really true? Have the machines really taken over?
It is alleged that social media fuels polarization, exploits human weaknesses and insecurities, and creates echo chambers where everyone gets their own slice of reality, eroding the public sphere and the understanding of common facts. And, worse still, this is all done intentionally in a relentless pursuit of profit.
At the heart of many of the concerns is an assumption that in the relationship between human beings and complex automated systems, we are not the ones in control. Human agency has been eroded. Or, as Joanna Stern declared in the Wall Street Journal in January, we’ve “lost control of what we see, read — and even think — to the biggest social-media companies.”
Defenders of social media have often ignored or belittled these criticisms — hoping that the march of technology would sweep them aside, or viewing them as misguided. This is a mistake: Technology must serve society, not the other way around. Faced with opaque systems operated by wealthy global companies, it is hardly surprising that many assume the lack of transparency exists to serve the interests of technology elites and not users. In the long run, people are only going to feel comfortable with these algorithmic systems if they have more visibility into how they work and then have the ability to exercise more informed control over them.
Companies like Facebook need to be frank about how the relationship between you and their major algorithms really works. And they need to give you more control.
Some critics seem to think social media is a temporary mistake in the evolution of technology — and that once we’ve come to our collective senses, Facebook and other platforms will collapse and we’ll all revert to previous modes of communication. This is a profound misreading of the situation — as inaccurate as the December 2000 Daily Mail headline declaring the internet “may just be a passing fad.” Even if Facebook ceased to exist, social media would not — could not — be uninvented. The human impulse to use the internet for social connection is profound.
Data-driven personalized services like social media have empowered people with the means to express themselves and to communicate with others on an unprecedented scale. And they have put tools into the hands of millions of small businesses around the world which were previously available only to the largest corporations. Personalized digital advertising not only allows billions of people to use social media for free, it is also more useful to consumers than untargeted, low-relevance advertising. Turning the clock back to some false sepia-tinted yesteryear — before personalized advertising, before algorithmic content ranking, before the grassroots freedoms of the internet challenged the powers that be — would forfeit so many benefits to society.
But that does not mean the concerns about how humans and algorithmic systems interact should be dismissed. There are clearly issues to be resolved and questions to be answered. The internet needs new rules — designed and agreed by democratically elected institutions — and technology companies need to make sure their products and practices are designed in a responsible way that takes into account their potential impact on society. That starts — but by no means ends — with putting people, not machines, more firmly in charge.
It Takes Two to Tango
Imagine you’re on your way home when you get a call from your partner. They tell you the fridge is empty and ask you to pick up a few things. If you choose the ingredients, they’ll cook dinner. So you swing by the supermarket and fill a basket with a dozen items. Of course, you only choose things you’d be happy to eat — maybe you choose pasta but not rice, tomatoes but not mushrooms. When you get home, you unpack the bag in the kitchen and your partner gets on with the cooking — deciding what meal to make, which of the ingredients to use, and in what amounts. When you sit down at the table, the dinner in front of you is the product of a joint effort, your decisions at the grocery store and your partner’s in the kitchen.
The relationship between internet users and the algorithms that present them with personalized content is surprisingly similar. Of course, no analogy is perfect and it shouldn’t be taken literally. There are other people who do everything from producing the food to designing the packaging and arranging the supermarket shelves, all of whose actions impact the final meal.
But ultimately, content ranking is a dynamic partnership between people and algorithms. On Facebook, it takes two to tango.
In a recent speech, the Executive Vice President of the European Commission, Margrethe Vestager, compared social media to the movie The Truman Show. In it, Jim Carrey’s Truman has no agency. He is the unwitting star of a reality TV show, where his entire world is fabricated and manipulated by a television production company. But this comparison doesn’t do justice to users of social media. You are an active participant in the experience.
The personalized “world” of your News Feed is shaped heavily by your choices and actions. It is made up primarily of content from the friends and family you choose to connect to on the platform, the Pages you choose to follow, and the Groups you choose to join. Ranking is then the process of using algorithms to order that content.
This is the magic of social media, the thing that differentiates it from older forms of media. There is no editor dictating the frontpage headline millions will read on Facebook. Instead, there are billions of front pages, each personalized to our individual tastes and preferences, and each reflecting our unique network of friends, Pages, and Groups.
Personalization is at the heart of the internet’s evolution over the last two decades. From searching on Google, to shopping on Amazon, to watching films on Netflix, a key feature of the internet is that it allows for a rich feedback loop in which our preferences and behaviors shape the service that is provided to us. It means you get the most relevant information and therefore the most meaningful experience. Imagine if, instead of presenting recommendations based on things you’ve watched, Netflix simply listed the thousands upon thousands of movies and shows in order of those most watched. Where would you even start?
When you think of how you experience Facebook, what you probably think of first is what you see on your News Feed. This is essentially Facebook’s front page, personalized to you: the vertical display of text, images, and videos that you scroll down once you open the Facebook app on your phone or log into facebook.com on your computer. The average person has thousands of posts they potentially could see at any given time, so to help you find the content you’ll find most meaningful or relevant, we use a process called ranking, which orders the posts in your Feed, putting the things we think you will find most meaningful closest to the top. The idea is that this results in content from your best friend being placed high in your Feed, while content from an acquaintance you met several years ago will often be much lower down.
Every piece of content that could potentially feature — including the posts you haven’t seen from your friends, the Pages you follow, and Groups you joined — goes through the ranking process. Thousands of signals are assessed for each post: who posted it, when it was posted, whether it’s a photo, video, or link, how popular it is on the platform, and the type of device you are using. From there, the algorithm uses these signals to predict how likely the post is to be relevant and meaningful to you: for example, how likely you might be to “like” it or find that viewing it was worth your time. The goal is to make sure you see what you find most meaningful — not to keep you glued to your smartphone for hours on end. You can think about this sort of like a spam filter in your inbox: it helps filter out content you won’t find meaningful or relevant, and prioritizes content you will.
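As a rough illustration, the predict-and-order step described above can be sketched in a few lines of Python. Everything here — the signal names, the weights, and the scoring function — is invented for illustration; the real system uses thousands of signals and machine-learned predictions rather than hand-written rules.

```python
# A deliberately simplified, hypothetical sketch of signal-based ranking.
# Signals, weights, and the scoring function are illustrative inventions,
# not Facebook's actual model.

def predict_relevance(post, viewer):
    """Combine a handful of toy signals into one relevance score."""
    score = 0.0
    if post["author"] in viewer["close_friends"]:
        score += 3.0                              # who posted it
    if post["type"] == "video":
        score += 0.5                              # media type
    score += min(post["likes"], 1000) / 1000      # popularity, capped
    return score

def rank_feed(posts, viewer):
    """Order candidate posts so the highest-scoring appear first."""
    return sorted(posts, key=lambda p: predict_relevance(p, viewer),
                  reverse=True)

viewer = {"close_friends": {"alice"}}
posts = [
    {"author": "brand_page", "type": "link", "likes": 900},
    {"author": "alice", "type": "photo", "likes": 12},
]
feed = rank_feed(posts, viewer)
# The close friend's modest post outranks the more popular Page post,
# mirroring the best-friend-versus-acquaintance example in the text.
```

The point of the sketch is the shape of the process, not the numbers: signals go in, a per-post prediction comes out, and the feed is simply the candidates sorted by that prediction.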
Before we credit “the algorithm” with too much independent judgment, it is of course the case that these systems are designed by people. It is Facebook’s decision makers who ultimately decide what content is acceptable on the platform. Facebook has detailed Community Standards, developed over many years, that prohibit harmful content — and invests heavily in developing ways of identifying it and acting on it quickly.
Of course, whether Facebook draws the line in the right place, or according to the right considerations, is a matter of legitimate public debate. And it is entirely reasonable to argue that private companies shouldn’t be making so many big decisions about what content is acceptable on their own. It would clearly be better if these decisions were made according to frameworks agreed by democratically accountable lawmakers. But in the absence of such laws, there are decisions that need to be made in real time.
Last year, Facebook established an Oversight Board to make the final call on some of these difficult decisions. It is an independent body and its decisions are binding — they can’t be overruled by Mark Zuckerberg or anyone else at Facebook. Indeed, at the time of writing the Board has already overturned a majority of Facebook’s decisions referred to it. The board itself is made up of experts and civic leaders from around the world with a wide range of backgrounds and perspectives, and they began issuing judgments and recommendations earlier this year. The board is currently considering Facebook’s decision to indefinitely suspend former U.S. President Donald Trump in the wake of his inciting comments which contributed to the horrendous scenes at the Capitol.
Other types of problematic content are addressed more directly through the ranking process. For example, there are types of content that might not violate Facebook’s Community Standards but are still problematic because users say they don’t like them. For these, Facebook reduces their distribution, as it does for posts deemed false by one of the more than 80 independent fact-checking organizations that evaluate Facebook content. In other words, how likely a post is to be relevant and meaningful to you acts as a positive in the ranking process, and indicators that the post may be problematic (but non-violating) act as a negative. The posts with the highest scores after that are placed closest to the top of your Feed.
This sifting and ranking process results in a News Feed that is unique to you, like a fingerprint. But of course, you don’t see the algorithm at work, and you have limited insight into why and how the content that appears was selected and what, if anything, you could do to alter it. And it is in this gap in understanding that assumptions, half-truths, and misrepresentations about how Facebook works can take root.
Where Does Facebook’s Incentive Lie?
Central to many of the charges by Facebook’s critics is the idea that its algorithmic systems actively encourage the sharing of sensational content and are designed to keep people scrolling endlessly. Of course, on a platform built around people sharing things they are interested in or moved by, content that provokes strong emotions is invariably going to be shared. At one level, the fact that people respond to sensational content isn’t new. As generations of newspaper sub-editors can attest, emotive language and arresting imagery grab people’s attention and engage them. It’s human nature. But Facebook’s systems are not designed to reward provocative content. In fact, key parts of those systems are designed to do just the opposite.
Facebook reduces the distribution of many types of content — meaning that content appears lower in your News Feed — because it is sensational or misleading, gratuitously solicits engagement, or has been found false by our independent fact-checking partners. For example, Facebook demotes clickbait (headlines that are misleading or exaggerated), highly sensational health claims (like those promoting “miracle cures”), and engagement bait (posts that explicitly seek to get users to engage with them).
Facebook’s approach goes beyond addressing sensational and misleading content post-by-post. When Pages and Groups repeatedly post some of these types of content to Facebook, like clickbait or misinformation, Facebook reduces the distribution of all the posts from those Pages and Groups. And where websites generate an extremely disproportionate amount of their traffic from Facebook relative to the rest of the internet, which can be indicative of a pattern of posting more sensational or spammy content, Facebook likewise demotes all the posts from the Pages run by those websites.
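The two levels of demotion just described — a penalty applied to an individual problematic post, and a blanket reduction for everything from a repeat-offender Page — can be sketched as score multipliers. The multiplier values and the three-strike threshold below are hypothetical, chosen only to make the mechanism concrete; the text does not disclose Facebook's actual thresholds.

```python
# A hypothetical sketch of post-level and Page-level demotion.
# Multiplier values and the strike threshold are illustrative, not real.

CLICKBAIT_PENALTY = 0.5            # per-post demotion factor (assumed)
REPEAT_OFFENDER_MULTIPLIER = 0.3   # Page-level demotion factor (assumed)
STRIKE_THRESHOLD = 3               # repeat-offender cutoff (assumed)

def adjusted_score(base_relevance, post, page_strikes):
    """Apply demotion multipliers on top of a post's relevance score."""
    score = base_relevance
    if post.get("is_clickbait"):
        score *= CLICKBAIT_PENALTY                 # demote this post
    if page_strikes.get(post["page"], 0) >= STRIKE_THRESHOLD:
        score *= REPEAT_OFFENDER_MULTIPLIER        # demote the whole Page
    return score

strikes = {"spammy_page": 5}
honest = adjusted_score(2.0, {"page": "news_page"}, strikes)
demoted = adjusted_score(2.0, {"page": "spammy_page", "is_clickbait": True},
                         strikes)
# Identical relevance, but the clickbait post from the repeat-offender
# Page ends up with a fraction of the score: 2.0 * 0.5 * 0.3 = 0.3.
```

Because the penalties multiply the relevance score rather than removing the post, demoted content can still appear — just much lower in the Feed, which matches the "reduced distribution" framing above.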
Facebook has also adjusted other aspects of its approach to ranking, including fundamental aspects, in ways likely to devalue sensational content. Since the early days of the platform, the company has relied on explicit engagement metrics — whether people “liked,” commented on, or shared a post — to determine which posts they would find most relevant. But the use of those metrics has evolved, and the range of other signals Facebook considers has expanded.
In 2018, Mark Zuckerberg announced that his product teams would focus not only on serving people the most relevant content but also on helping them have more meaningful social interactions — primarily by promoting content from friends, family, and groups they are part of over content from Pages they follow. The effect was to change ranking such that explicit engagement metrics would still play a prominent role in filtering the posts likely to be most relevant to you, but now with an extra layer of assessing which of those potentially relevant posts was also likely to be meaningful to you. In doing this, he recognized explicitly that this shift would lead to people spending less time on Facebook because Pages — where media entities, sports teams, politicians, and celebrities among others tend to have a presence — generally post more engaging though less meaningful content than, say, your mum or dad. The prediction proved correct, as the change led to a decrease of 50 million hours’ worth of time spent on Facebook per day, and prompted a loss of billions of dollars in the company’s market cap.
This shift was part of an evolution of Facebook’s approach to ranking content. The company has since diversified its approach, finding new ways to determine what content people find most meaningful, including asking them directly and then incorporating those responses into the ranking process. For example, Facebook uses surveys to learn which posts people feel are worth their time and then prioritizes posts predicted to fit that bill. Surveys are also used to better understand how meaningful different friends, Pages, and Groups are to people, and ranking algorithms are updated based on the responses. This approach gives a more complete picture of the types of posts people find most meaningful, assessing their experience beyond the immediate reaction — and beyond the instant pull of any sensational content.
Facebook is also in the relatively early stages of exploring whether and how to rank some important categories of content differently — like news, politics, or health — in order to make it easier to find posts that are valuable and informative. And last month it was announced that it is considering new steps to reduce the amount of political content — where sensationalism is no stranger — in News Feed in response to strong feedback from users that they want to see less of it overall. This follows Facebook’s recent decision to stop recommending civic and political groups in the US, which is now being expanded globally.
This evolution also applies to the Groups people join around shared interests or experiences. Facebook has taken significant steps to make these spaces safer, including restricting or removing members or groups that violate its Community Standards.
Facebook recognizes there are times when it is in the wider interest of society for authoritative information about topical issues to be prioritized in your News Feed. But just as messages from doctors telling us to eat our vegetables or dentists reminding us to floss will never be as engaging as celebrity gossip or political punditry, Facebook understands that it needs to supplement the ranking process to help more people find authoritative information. Last year, it did just that, helping people find accurate, up-to-date information around both Covid-19 and the U.S. elections. In both cases Facebook created information hubs with links and resources from official sources, and promoted these at the top of people’s News Feeds. Both had huge reach — more than 600 million people clicked through to credible sources of information about Covid-19 via Facebook and Instagram, and an estimated 4.5 million Americans were helped to register to vote.
The reality is, it’s not in Facebook’s interest — financially or reputationally — to continually turn up the temperature and push users towards ever more extreme content.
The company’s long-term growth will be best served if people continue to use its products for years to come. If it prioritized keeping you online an extra 10 or 20 minutes, but in doing so made you less likely to return in the future, it would be self-defeating. And bear in mind, the vast majority of Facebook’s revenue comes from advertising. Advertisers don’t want their brands and products displayed next to extreme or hateful content — a point that many made explicitly last summer during a high-profile boycott by a number of household-name brands. Even though troubling content is a small proportion of the total (hate speech is viewed 7 or 8 times for every 10,000 views of content on Facebook), the protest showed that Facebook’s financial self-interest is to reduce it, and certainly not to encourage it or optimize for it.
But even if you agree that Facebook’s incentives do not support the deliberate promotion of extreme content, there is nonetheless a widespread perception that political and social polarization, especially in the United States, has grown because of the influence of social media. This has been the subject of swathes of serious academic research in recent years — the results of which are in truth mixed, with many studies suggesting that social media is not the primary driver of polarization after all, and that evidence of the ‘filter bubble’ effect is thin at best.
Research from Stanford last year looked in depth at trends in nine countries over 40 years, and found that in some countries polarization was on the rise before Facebook even existed, and in others it has been decreasing while internet and Facebook use increased. Other credible recent studies have found that polarization in the United States has increased the most among the demographic groups least likely to use the internet and social media, and data published in the EU suggests that levels of ideological polarization are similar whether you get your news from social media or elsewhere.
A Harvard study ahead of the 2020 U.S. election found that election-related disinformation was primarily driven by elite and mass-media, not least cable news, and suggested that social media played only a secondary role. And research from both Pew in 2019 and the Reuters Institute in 2017 showed that you’re likely to encounter a more diverse set of opinions and ideas using social media than if you only engage with other types of media.
An earlier Stanford study showed that deactivating Facebook for four weeks before the 2018 US elections reduced polarization on political issues but also led to a reduction of people’s news knowledge and attention to politics. However, it did not significantly lessen so-called “affective polarization,” which is a measure of someone’s negative feelings about the opposite party.
What evidence there is simply does not support the idea that social media, or the filter bubbles it supposedly creates, are the unambiguous driver of polarization that many assert. One thing we do know is that political content is only a small fraction of the content people consume on Facebook — our own analysis suggests that in the U.S. it is as little as 6%. Last year, the spike in posting on Halloween was twice as large as the one on Election Day — and that’s despite the fact that Facebook prompted people at the top of their News Feed to post about voting.
How to Train Your Algorithm
The relationship between a user and an algorithm is not nearly as transparent as the one between the couple cooking dinner — one shopping for ingredients, the other cooking — where both sides have a meaningful understanding of what they are putting in and getting out.
That needs to change. You should be able to better understand how the ranking algorithms work and why they make particular decisions, and you should have more control over the content that is shown to you. You should be able to talk back to the algorithm and consciously adjust or ignore the predictions it makes — to alter your personal algorithm in the cold light of day, through breathing spaces built into the design of the platform.
To better put this into practice, Facebook has launched a suite of product changes to help you more easily identify and engage with the friends and Pages you care most about. And it’s placing a new emphasis not just on creating such tools, but on ensuring that they’re easy to find and to use.
A new product called Favorites, which improves on the previous See First control, allows you to see the top friends and Pages that Facebook predicts are the most meaningful to you — and, importantly, you can adopt those suggestions or simply add other friends and Pages if you want. Posts from people or Pages that you manually select will then be boosted in your News Feed and marked with a star. Such posts will also populate a new Favorites feed, an alternative to the standard News Feed.
For some time, it has been possible to view your News Feed chronologically, so that the most recent posts appear highest up. This turns off algorithmic ranking, something that should be of comfort to those who distrust the role Facebook’s algorithms play in what they see. But this feature hasn’t always been easy to find. So Facebook is introducing a new “Feed Filter Bar” to make toggling between this Most Recent feed, the standard News Feed, and the new Favorites feed easier.
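Conceptually, the three feed modes are just three different orderings of the same candidate posts. The sketch below makes that concrete; the function and field names are hypothetical, not Facebook's implementation, and the ranked mode assumes a relevance score has already been computed upstream.

```python
# Hypothetical sketch of the three feed modes: the same candidate posts,
# ordered differently depending on the mode the user selects.

def order_feed(posts, mode):
    if mode == "most_recent":
        # Chronological: newest first, no algorithmic ranking at all.
        return sorted(posts, key=lambda p: p["timestamp"], reverse=True)
    if mode == "favorites":
        # Only posts from sources the user has starred, newest first.
        return sorted((p for p in posts if p["favorite"]),
                      key=lambda p: p["timestamp"], reverse=True)
    # Default ranked mode: ordered by a precomputed relevance score.
    return sorted(posts, key=lambda p: p["score"], reverse=True)

posts = [
    {"id": 1, "timestamp": 100, "score": 0.9, "favorite": False},
    {"id": 2, "timestamp": 200, "score": 0.2, "favorite": True},
]
# Ranked mode surfaces post 1 first; Most Recent and Favorites both
# surface post 2, because it is newer and comes from a starred source.
```

Switching modes changes only the sort key and filter, not the underlying pool of posts — which is why the toggle gives users control without anything being hidden from them.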
Similarly, for some time Facebook has tried to increase transparency around why a particular ad has appeared in your News Feed through the “Why Am I Seeing This?” tool, which you can see by clicking on the three dots in the top right corner of an ad. This was extended to most posts in your News Feed in 2019, and today it’s available for some suggested posts too, so you can better understand why those cookery videos or movie news articles keep appearing as you scroll.
These measures are part of a significant shift in the company’s thinking about how it gives people greater understanding of, and control over, how its algorithms rank content, and how it can at the same time utilize content ranking and distribution to ensure the platform has a positive impact on society as a whole.
Other measures coming this year include providing more transparency about how the distribution of problematic content is reduced; making it easier to understand what content is popular in News Feed; launching more surveys to better understand how people feel about the interactions they have on Facebook and transparently adjusting our ranking algorithms based on the results; publishing more of the signals and predictions that guide the News Feed ranking process; and connecting people with authoritative information in more areas where there is a clear societal benefit, like the climate science and racial justice hubs. More changes are planned over the course of the year.
Making Peace with the Machines
Putting more choices into people’s hands is not a panacea for all the problems that can occur on an open social platform like Facebook. Many can and do choose sensational and polarizing content over alternatives.
Social media lets people discuss, share, and criticize freely and at scale, without the boundaries or mediation previously imposed by the gatekeepers of the traditional media industry. For hundreds of millions of people, it is the first time that they have been able to speak freely and be heard in this way, with no barrier to entry apart from an internet connection. People don’t just have a video camera in their pocket — with social media, they also have the means to distribute what they see.
This is a dramatic and historic democratization of speech. And like any democratizing force, it challenges existing power structures. Political and cultural elites are confronting a raucous online conversation that they can’t control, and many are understandably anxious about it.
Wherever possible, I believe that people should be able to choose for themselves, and that people can generally be trusted to know what is best for them. But I am also acutely conscious that we need collectively-agreed ground rules, both on social media platforms and in society at large, to reduce the likelihood that the choices exercised freely by individuals will lead to collective harms. Politics is in large part a conversation about how we define those ground rules in a way that enjoys the widest possible legitimacy, and the challenge that social media now faces is, for better or worse, inherently political.
Should a private company be intervening to shape the ideas that flow across its systems, above and beyond the prevention of serious harms like incitement to violence and harassment? If so, who should make that decision? Should it be determined by an independent group of experts? Should governments set out what kinds of conversation citizens are allowed to participate in? Is there a way in which a deeply polarized society like the U.S. could ever agree on what a healthy national conversation looks like? How do we account for the fact that the internet is borderless and speech rules will need to accommodate a multiplicity of cultural perspectives?
These are profound questions — and ones that shouldn’t be left to technology companies to answer on their own.
Promoting individual agency is the easy bit. Identifying content which is harmful and keeping it off the internet is challenging, but doable. But agreeing on what constitutes the collective good is very hard indeed. A case in point is the decision Facebook took to suspend former President Trump from the platform. Many welcomed the decision — indeed, many argued strongly that it was about time that Facebook and others took such decisive action. It is a decision that I absolutely believe was right. But it was also perhaps the most dramatic example of the power of technology in public discourse, and it has provoked legitimate questions about the balance of responsibility between private companies and public and political authorities.
Whether governments now choose to tighten the terms of online debate or private companies choose to do so themselves, we should remain wary of the conclusion that the answer to these dilemmas is always less speech. While we shouldn’t assume that perfect freedom leads to perfect outcomes, nor should we assume that extending freedom of speech will lead to a degradation of society. Implicit in the arguments made by many of social media’s critics is an assumption that people can’t be trusted with an extensive right to free speech; or that this freedom is an illusion and that their minds are really being controlled by the algorithm and the sinister intentions of its Big Tech masters.
Perhaps it is time to acknowledge that it is not simply the fault of faceless machines. Consider, for example, the presence of bad and polarizing content on private messaging apps — iMessage, Signal, Telegram, WhatsApp — used by billions of people around the world. None of those apps deploy content-ranking algorithms. It’s just humans talking to humans without any machine getting in the way. In many respects, it would be easier to blame everything on algorithms, but there are deeper and more complex societal forces at play. We need to look at ourselves in the mirror, and not wrap ourselves in the false comfort that we have simply been manipulated by machines all along.
The truth is machines have not taken over, but they are here to stay. We need to make our peace with them.
A better understanding of the relationship between the user and the algorithm is in everyone’s interest. People need to have confidence in the systems that are so integral to modern life. The internet needs new rules for the road that can command broad public consent. And tech companies need to know the parameters within which society is comfortable for them to operate, so that they have permission to continue to innovate. That starts with openness and transparency, and with giving you more control.