338
Eleven
Dictatorship of the Like
19
One
Trapped in the Casino
22
Social media platforms surfaced whatever content their automated systems had concluded would maximize users’ activity online, thereby allowing the companies to sell more ads.
22
Facebook wasn’t just indulging anti-vaccine extremists. It was creating them.
29
Facebook’s homepage to show each user a personalized feed of what their friends were up to on the site. Until then, you had to check each profile or group manually for any activity. Now, if one friend changed her relationship status, another posted about bad pizza in the cafeteria, and another signed up for an event, all of that would be reported on your homepage.
That stream of updates had a name: the news feed.
30
In reality, only a minority of users ever joined. But proliferating updates made them look like an overwhelming majority. And the news feed rendered each lazy click of the “join” button as an impassioned shout: “Against News Feed” or “I HATE FACEBOOK.” The appearance of widespread anger, therefore, was an illusion. But human instincts to conform run deep. When people think something has become a matter of consensus, psychologists have found, they tend not only to go along, but to internalize that sentiment as their own.
31
That digital amplification had tricked Facebook’s users, and even its leadership, into misperceiving the platform’s loudest voices as representing everyone, growing a flicker of anger into a wildfire. But, crucially, it had also done something else: driven engagement up.
31
Facebook soon allowed anyone to register for the site. User growth rates, which had barely budged during the prior expansion round, exploded by 600 or 700 percent. The average amount of time each person spent online grew rapidly, too. Just thirteen months later, in the fall of 2007, the company was valued at $15 billion.
31
When the news feed launched in 2006, 11 percent of Americans were on social media. Between 2 and 4 percent used Facebook. Less than a decade later, in 2014, nearly two thirds of Americans used social networking, among whom Facebook, YouTube, and Twitter were near-universal. That year, halfway through the second Obama term, a significant threshold was crossed in the human experience. For the first time, the 200 million Americans with an active Facebook account spent, on average, more time on the platform (forty minutes per day) than they did socializing in person (thirty-eight minutes). Just two years later, by the summer of 2016, nearly 70 percent of Americans used Facebook-owned platforms, averaging fifty minutes per day.
35
intermittent variable reinforcement.
36
Social media does the same. Posting on Twitter might yield a big social payoff, in the form of likes, retweets, and replies. Or it might yield no reward at all.
36
abusive relationships. Abusers veer unpredictably between kindness and cruelty, punishing partners for behaviors that they had previously rewarded with affection. This can lead to something called traumatic bonding. The victimized partner finds herself compulsively seeking a positive response, like a gambler feeding a slot machine,
36
Further, while posting to social media can feel like a genuine interaction between you and an audience, there is one crucial, invisible difference. Online, the platform acts as unseen intermediary. It decides which of your comments to distribute to whom, and in what context. Your next post might get shown to people who will love it and applaud, or to people who will hate it and heckle, or to neither. You’ll never know because its decisions are invisible. All you know is that you hear cheers, boos, or crickets.
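A toy simulation can make that invisible-intermediary dynamic concrete. This is an illustrative sketch only, with made-up probabilities rather than anything drawn from a real platform: the poster sends identical posts, the "platform" silently decides the audience, and all the poster ever observes is the unpredictable feedback.

```python
import random

# Illustrative only: a toy model of the "invisible intermediary" described
# above, where the platform, not the poster, decides who sees each post,
# so feedback arrives on an unpredictable, slot-machine-like schedule.
# None of these numbers come from any real platform.

AUDIENCES = {
    "fans": 0.3,      # assumed chance the post is routed to people who applaud
    "hecklers": 0.2,  # assumed chance it is routed to people who pile on
    "nobody": 0.5,    # assumed chance it is shown to almost no one
}

def feedback_for_post() -> str:
    """Return the only thing the poster ever observes: cheers, boos, or crickets."""
    roll = random.random()
    cumulative = 0.0
    for audience, probability in AUDIENCES.items():
        cumulative += probability
        if roll < cumulative:
            return {"fans": "cheers", "hecklers": "boos", "nobody": "crickets"}[audience]
    return "crickets"

if __name__ == "__main__":
    # Ten identical posts, ten unpredictable outcomes: the variable
    # reinforcement schedule the chapter compares to a slot machine.
    print([feedback_for_post() for _ in range(10)])
```

Run it a few times and the same behavior yields different payoffs, which is exactly the intermittent variable reinforcement described above.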
37
Unlike slot machines, which are rarely at hand in our day-to-day lives, social media apps are some of the most easily accessible products on earth. It’s a casino that fits in your pocket, which is how we slowly train ourselves to answer any dip in our happiness with a pull at the most ubiquitous slot machine in history. The average American checks their smartphone 150 times per day, often to open social media. We don’t do this because compulsively checking social media apps makes us happy. In 2018, a team of economists offered users different amounts of money to deactivate their account for four weeks, looking for the threshold at which at least half of them would say yes. The number turned out to be high: $180. But the people who deactivated experienced more happiness, less anxiety, and greater life satisfaction. After the experiment was over, they used the app less than they had before.
37
Why had these subjects been so resistant to giving up a product that made them unhappy? Their behavior, the economists wrote, was “consistent with standard habit formation models”—i.e., with addiction—leading to “sub-optimal consumption choices.” A clinical way of saying the subjects had been trained to act against their own interests.
38
In early 2009, a product manager named Leah Pearlman, who’d worked on the feature since shortly after joining Facebook at age twenty-three, published a post announcing it as “an easy way to tell friends that you like what they’re sharing on Facebook with one easy click.” Traffic surged immediately, well beyond internal expectations.
38
That little button’s appeal, and much of social media’s power, comes from exploiting something called the sociometer. The concept emerged out of a question posed by the psychologist Mark Leary: what is the purpose of self-esteem? The anguish we feel from low self-esteem is wholly self-generated. We would not have developed such an unusual and painful vulnerability, Leary reasoned, unless it provided some benefit outweighing its tremendous psychic costs. His theory, now widely held, is that self-esteem is in fact “a psychological gauge of the degree to which people perceive that they are relationally valued and socially accepted by other people.”
38
Human beings are some of the most complex social animals on earth. We evolved to live in leaderless collectives far larger than those of our fellow primates: up to about 150 members. As individuals, our ability to thrive depended on how well we navigated those 149 relationships—not to mention all of our peers’ relationships with one another. If the group valued us, we could count on support, resources, and probably a mate. If it didn’t, we might get none of those. It was a matter of survival, physically and genetically.
39
Over millions of years, those pressures selected for people who are sensitive to and skilled at maximizing their standing. It’s what the anthropologist Brian Hare called “survival of the friendliest.” The result was the development of a sociometer: a tendency to unconsciously monitor how other people in our community seem to perceive us. We process that information in the form of self-esteem and such related emotions as pride, shame, or insecurity. These emotions compel us to do more of what makes our community value us and less of what doesn’t. And, crucially, they are meant to make that motivation feel like it is coming from within. If we realized, on a conscious level, that we were responding to social pressure, our performance might come off as grudging or cynical, making it less persuasive.
39
Facebook’s “Like” feature, some version of which now exists on every platform, is the equivalent of a car battery hooked up to that sociometer. It gives whoever controls the electric jolts tremendous power over our behavior. It’s not just that “likes” provide the social validation we spend so much of our energy pursuing; it’s that they offer it at an immediacy and scale heretofore unknown in the human experience. Off-line, explicit validation is relatively infrequent. Even rarer is hearing it announced publicly, which is the most powerful form of approval because it conveys our value to the broader community. When’s the last time fifty, sixty, seventy people publicly applauded you off-line? Maybe once every few years—if ever? On social media, it’s a normal morning.
40
Further, the platforms added a powerful twist: a counter at the bottom of each post indicating the number of likes, retweets, or upvotes it had received—a running quantification of social approval for each and every statement.
40
In fact, the incentive is so powerful that it even shows up on brain scans. When we receive a Like, neural activity flares in a part of the brain called the nucleus accumbens:
40
Subjects with a smaller nucleus accumbens—a trait associated with addictive tendencies—use Facebook for longer stretches. And when heavy Facebook users get a Like, that gray matter displays more activity than it does in lighter users, as in gambling addicts who’ve been conditioned to exult in every pull of the lever.
41
For most of us, the process is subtler. Instead of buying Facebook ads, we modify our day-to-day posts and comments to keep the dopamine coming, usually without realizing we have done it.
42
the single most powerful force on social media is identity.
42
Our sense of self derives largely from our membership in groups.
43
social identity theory. They traced its origins back to a formative challenge of early human existence. Many primates live in cliques. Humans, in contrast, arose in large collectives, where family kinship was not enough to bind mostly unrelated group members. The dilemma was that the group could not survive without each member contributing to the whole, and no one individual, in turn, could survive without support from the group.
43
Social identity, Tajfel demonstrated, is how we bond ourselves to the group and they to us. It’s why we feel compelled to hang a flag in front of our house, don an alma mater T-shirt, slap a bumper sticker on our car. It tells the group that we value our affiliation as an extension of ourselves and can therefore be trusted to serve its common good.
43
Our drive to cultivate a shared identity is so powerful that we’ll construct one even out of nothing. In one experiment, researchers assigned volunteers one of two labels by a simple coin toss, then had them play a game. Each showed greater generosity to others with the same label, even though they knew the division was meaningless. The same behavior has emerged in dozens of experiments and real-world situations, with people consistently embracing any excuse to divide between “us” and “them”—and showing distrust, even hostility, toward those in the out-group. During lunch breaks on the set of the 1968 movie Planet of the Apes, for instance, extras spontaneously separated into tables according to whether they played chimpanzees or gorillas. For years afterward, Charlton Heston, the film’s star, recounted the “instinctive segregation” as “quite spooky.” When the sequel filmed, a different set of extras repeated the behavior exactly.
44
Prejudice and hostility have always animated this instinct. Hunter-gatherer tribes sometimes competed for resources or territory. One group’s survival might require the defeat of another. Because of that, social-identity instincts drive us to distrust and, if necessary, rally against out-group members. Our minds compel those behaviors by sparking two emotions in particular: fear and hate. Both are more social than you might think. Fear of a physical threat from without causes us to feel a greater sense of camaraderie with our in-group, as if rushing to our tribe for safety. It also makes us more distrustful of, and more willing to harm, people whom we perceive as different. Think of the response to the September 11 attacks: a tide of patriotic flag-waving fervor and an alignment of fellow feeling, but one that was also followed by a spike in anti-Muslim hate crimes.
44
These are deeply social instincts, so social media platforms, by turning every tap or swipe into a social act, reliably surface them. And because the platforms elevate whatever sentiments best win engagement, they often produce those instincts in their most extreme form. The result can be an artificial reality in which the in-group is always virtuous but besieged, the out-group is always a terrifying threat, and virtually everything that happens is a matter of us-versus-them.
45
Social media’s indulgence of identity wasn’t obviously harmful at first. But it was always well known. In 2012, a left-wing activist raised money from co-founders of Facebook and Reddit to start Upworthy, which produced content tailored to spread on social media.
45
one formula proved especially effective: headlines promising to portray the user’s implied in-group (liberals, usually) as humiliating a reviled out-group (creationists, corporations, racists). “A Man Slams a Bigoted Question So Hard He Brings Down the House.”
45
organizations rose or reorganized around chasing virality. BuzzFeed became an internet giant on list-based articles indulging users’ desire for social-identity affirmation: “28 Signs You Were Raised by Irish Parents” or “31 Things Only People from a Small Town Will Understand.”
46
In 2014, I was one of several Washington Post reporters to start Vox, a news site intended to leverage the web. We never shaped our journalism to please social media algorithms—at least, not consciously—but headlines were devised with them in mind. The most effective approach, though one that, in retrospect, we should perhaps have been warier of, was identity conflict. Liberals versus conservatives. The righteousness of anti-racism. The outrageousness of lax gun laws.
46
Often, that meant hyperpartisan provocateurs, for-profit click farms, outright scammers. Unconstrained by any fealty to fairness, accuracy, or the greater good, they ran up huge audiences by indulging, or provoking, identity conflicts.
52
Two
Everything Is Gamergate
83
It emerged out of a crisis that Facebook faced in 2008, the sort that focused the company’s attention like few others: user growth had stalled. In any other industry, capping out around 90 million customers might be an opportunity to explore new or better products to sell them. But in the web economy, a static userbase could be deadly. “I remember people saying it’s not clear if it was ever going to get past a hundred million at that time,” Zuckerberg has said. “We basically hit a wall and we needed to focus on that.”
83
Facebook, in the hopes of boosting engagement, began experimenting with breaking the so-called Dunbar limit. The British anthropologist Robin Dunbar had proposed, in the 1990s, that humans are cognitively capped at maintaining about 150 relationships. It was a number derived from the maximum-150-person social groups in which we’d evolved. Any more than that and our neocortex—the part of our brain governing social cognition—maxes out. Our behavior changes, too, seeking to reset back to 150, like a circuit breaker tripping. Even online, people converged naturally on Dunbar’s number. In 2010, the average Facebook user had about 130 friends; the social network Friendster even capped the number of friends at 150.
84
studies of rhesus monkeys and other macaques, whose Dunbar-like limits are thought to mirror our own, had found that pushing them into larger groups made them more aggressive, more distrusting, and more violent. It was as if all the dangers of living in a community got amped up and the pleasures reduced. The monkeys seemed to sense that safely navigating an unnaturally large group was beyond their abilities, triggering a social fight-or-flight response that never quite turned off. They also seemed to become more focused on forming and enforcing social hierarchies, likely as a kind of defense mechanism.
84
Facebook soon found an even more powerful method for expanding users’ communities. Rather than strain to expand your friends list beyond 150 people, Facebook could push you into groups—stand-alone discussion pages focused on some topic or interest—ten times that size. This also shifted yet more power to Facebook’s systems. No longer limited to content from people near your social circle, the system could nudge you into groups from anywhere on the platform.
85
“There’s this conspiracy-correlation effect,” DiResta said, “in which the platform recognizes that somebody who’s interested in conspiracy A is typically likely to be interested in conspiracy B, and pops it up to them.” Facebook’s groups era promoted something more specific than passive consumption of conspiracies. Simply reading about contrails or lab-made viruses might fill twenty minutes. But joining a community organized around fighting back could become a daily ritual for months or years.
85
Each time a user succumbed, they trained the system to nudge others to do the same. “If they bite,” DiResta said, “then they’ve reinforced that learning. Then the algorithm will take that reinforcement and increase the weighting.”
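The feedback loop DiResta describes can be sketched in a few lines. This is a minimal, assumed illustration, not Facebook's actual system: a toy recommender whose only rule is that every accepted suggestion ("if they bite") raises the weight on that pairing, so the next user who joins the first group is even more likely to be shown the second.

```python
from collections import defaultdict

# Illustrative only: a toy version of the engagement-driven weighting
# described above. The update rule and learning rate are assumptions.

class ToyGroupRecommender:
    def __init__(self, learning_rate: float = 0.1):
        self.learning_rate = learning_rate
        # weights[source_group][candidate_group] -> propensity to recommend
        self.weights = defaultdict(lambda: defaultdict(float))

    def recommend(self, joined_group: str, candidates: list[str]) -> str:
        """Suggest the candidate group with the highest learned weight."""
        scores = self.weights[joined_group]
        return max(candidates, key=lambda g: scores[g])

    def record_outcome(self, joined_group: str, suggested: str, accepted: bool):
        """Each acceptance reinforces the pairing; a rejection decays it slightly."""
        delta = self.learning_rate if accepted else -self.learning_rate / 2
        self.weights[joined_group][suggested] += delta

# Usage: if early members of a chemtrail group keep accepting an
# anti-vaccine suggestion, later joiners get that suggestion by default.
rec = ToyGroupRecommender()
for _ in range(3):
    rec.record_outcome("chemtrails", "anti-vaccine", accepted=True)
print(rec.recommend("chemtrails", ["gardening", "anti-vaccine"]))  # -> "anti-vaccine"
```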
86
Others from DiResta’s informal group of social media watchers were noticing Facebook and other platforms routing them in similar ways. The same pattern played out over and over, as if those A.I.s had all independently arrived at some common, terrible truth about human nature. “I called it radicalization via the recommendation engine,” she said. “By having engagement-driven metrics, you created a world in which rage-filled content would become the norm.”
86
The algorithmic logic was sound, even brilliant. Radicalization is an obsessive, life-consuming process. Believers come back again and again, their obsession becoming an identity, with social media platforms the center of their day-to-day lives. And radicals, driven by the urgency of their cause, recruit other radicals. “We had built an outrage machine in which people actually participated in pushing the content along,” DiResta said, where the people who became radicalized were thereafter “the disseminators of that content.”
86
“Ordinary people began to feel like they were like soldiers in an online army fighting for their cause,” she said. It was only a matter of time until they willed one another to action.
88
Three
Opening the Portal
88
voted “up” if they liked it and “down” if they didn’t. The most-upvoted links appeared at the top of the page, where millions would see them. A comment section attached to each post operated by the same rules. Dropping into a conversation, you’d see crowd-pleasing statements first and unpopular comments not at all.
89
Its up-or-down voting enforced an eclipsing majoritarianism that pushed things even further. So did a dynamic similar to Facebook’s likes: upvote counts are publicly displayed, tapping into users’ sociometer-driven impulse for validation. The dopamine-chase glued users to the site and, as on Facebook, steered their actions.
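The mechanism itself is almost trivially simple. A bare-bones sketch of that up-or-down sorting, with invented comments and vote counts (Reddit's real ranking formulas are more elaborate), shows how majoritarian it is: whatever the crowd rewards fills the top of the page, and dissent sinks out of sight.

```python
from dataclasses import dataclass

# Illustrative only: the simplest possible version of up-or-down sorting.

@dataclass
class Comment:
    text: str
    upvotes: int
    downvotes: int

    @property
    def score(self) -> int:
        return self.upvotes - self.downvotes

def rank_comments(comments: list[Comment]) -> list[Comment]:
    """Most-approved first; heavily downvoted comments effectively disappear."""
    return sorted(comments, key=lambda c: c.score, reverse=True)

thread = [
    Comment("measured, unpopular take", upvotes=4, downvotes=9),
    Comment("crowd-pleasing outrage", upvotes=220, downvotes=12),
    Comment("mild joke", upvotes=35, downvotes=3),
]
for c in rank_comments(thread):
    print(c.score, c.text)
```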
102
As with Gamergate, it wasn’t just that the content was salacious. It promoted feelings of group-identity threat—the Muslims are coming to kill us, the Jews will erase our culture, the liberals want to destroy traditional values—which activated the tribal-defense instinct of users that Henri Tajfel and his team of social psychologists had identified decades earlier. It spurred them to share ever more links and comments tightening their in-group identity and rallying against the common enemy. It kept them clicking and posting, whipping other users into the same shared frenzy, an endless feedback loop of fear and rage that proved enormously beneficial for Silicon Valley and Donald Trump, but disastrous for everybody else.
103
Across platforms, discussion on immigration “gravitated more often to issues of identity threat,” the study’s authors found, naturally privileging far-right worldviews that center, by definition, on fears of identity conflict.
103
The result was a near-universal convergence on these behaviors and ways of thinking, incentivized all along the way by social media. There were moments, Wu admitted, when she found herself tempted to spin up outrage, to rally her followers against some adversary, to push a claim that, while dubious, might flatter the identity and inflame the prejudices of her in-group. She usually caught herself, but not always.
“It’s something I really struggle with, myself, in my own person, the way I interact with the world, because there’s something really dangerous that’s been unlocked here,” she said. “This cycle of aggrievement and resentment and identity, and mob anger, it feels like it’s consuming and poisoning the entire nation.”
101
Yiannopoulos and Bannon have long claimed credit for merging internet-troll culture with the mainstream right,
Note: now I know why these sensitive ass people can’t handle being trolled a little and associate it with “4chan” behavior even if it’s not bigoted
230
Eight
Church Bells
230
“Social media plays the role that the ringing of the church bells used to play in the past,” Santamaría said. “That’s the way that people know that a lynching is going to happen.” The platforms, she explained, reproduced certain age-old mechanisms by which a community worked itself into collective violence. Lynching, when a group follows its moral outrage to the point of hurting or killing someone—the tyranny of cousins at work—is a communal impulse. A public show of what happens to those transgressing the tribe.
231
“The aim of it is to communicate,” Santamaría said of lynching. The false rumors that consistently spread in advance of mass violence, she believed, were the tell that social media had learned to reproduce that age-old process. More than merely triggering preexisting sentiment, social media was creating it. The rumors were hardly random. “They have a logic to them,” she said. “They do not target everyone.” Rather, the rumors activated a sense of collective peril in groups that were dominant but felt their status was at risk—majorities angry and fearful over change that threatened to erode their position in the hierarchy. Because the impersonal forces of social change are, for most people, no more defeatable than the weather, social media had stepped in to provide a more corporeal, conquerable villain: feminist bloggers, the religious minority next door, refugees. “This finally is something that you have control over,” Santamaría said. “You can actually do something about it.”
231
In Myanmar, social media platforms indulged the fears of the long-dominant Buddhist majority who felt, with democracy’s arrival, a shift in the status quo that had long privileged them. In India, it was the Hindu majority, on similar grounds. In 2018, BBC reporters in northern Nigeria found the same pattern, the Berom majority pitted against the Fulani minority, all on Facebook. In America, social media had tapped into white backlash against immigration, Black Lives Matter, increased visibility of Muslims, cultural recalibration toward greater tolerance and diversity. The most-shared rumors, Santamaría pointed out, often had to do with reproduction or population. Sri Lanka and sterilization pills. America and a liberal plot to replace white people with refugees.
232
The defining element across all these rumors was something more specific and dangerous than generalized outrage: a phenomenon called status threat. When members of a dominant social group feel at risk of losing their position, it can spark a ferocious reaction. They grow nostalgic for a past, real or imagined, when they felt secure in their dominance (“Make America Great Again”). They become hyper-attuned to any change that might seem tied to their position: shifting demographics, evolving social norms, widening minority rights. And they grow obsessed with playing up minorities as dangerous, conjuring stories and rumors to confirm the belief. It’s a kind of collective defense mechanism to preserve dominance. It is mostly unconscious, almost animalistic, and therefore easily manipulated, whether by opportunistic leaders or profit-seeking algorithms.
232
The problem isn’t just that social media learned to promote outrage, fear, and tribal conflict, all sentiments that align with status threat. Online, as we post updates visible to hundreds or thousands of people, charged with the group-based emotions that the platforms encourage, “our group identities are more salient” than our individual ones, as William Brady and Molly Crockett wrote in their paper on social media’s effects. We don’t just become more tribal, we lose our sense of self. It’s an environment, they wrote, “ripe for the psychological state of deindividuation.”
232
The shorthand definition of deindividuation is “mob mentality,” though it is far more common than actually joining a mob. You can deindividuate by sitting in the stands at a sports game or singing along in church, surrendering part of your will to that of the group. The danger comes when these two forces mix: deindividuation, with its power to override individual judgment, and status threat, which can trigger collective aggression on a terrible scale.
240
Hyperactive users like Wassermann tend to be “more opinionated, more extreme, more engaged, more everything,” said Andrew Guess, a Princeton University social scientist.
240
When more casual users open social media, often what they see is a world shaped by superposters. Social media attracts people with certain personality tics that make heavy usage unusually gratifying. Their predominance, in turn, distorts the platforms’ norms and biases.
241
And those defining traits and tics of superposters, mapped out in a series of psychological studies, are broadly negative. One is dogmatism: “relatively unchangeable, unjustified certainty.” Dogmatists tend to be narrow-minded, pushy, and loud. Another: grandiose narcissism, defined by feelings of innate superiority and entitlement. Narcissists are consumed by cravings for admiration and belonging, which makes social media’s instant feedback and large audiences all but irresistible. That need is deepened by superposters’ unusually low self-esteem, which is exacerbated by the platforms themselves. One study concluded simply, “Online political hostility is committed by individuals who are predisposed to be hostile in all contexts.” Neurological experiments confirmed this: superposters are drawn toward and feel rewarded by negative social potency, a clinical term for deriving pleasure from deliberately inflicting emotional distress on others. Further, by using social media more, and by being rewarded for this with greater reach, superposters pull the platforms toward these defining tendencies of dogmatism, narcissism, aggrandizement, and cruelty.
Note: I think you see these characteristics in tyrants. Need of certainty due to anxiety, need for praise and recognition, being cruel gives them a sense of power and revenge
242
In truth, studies find over and over, our sense of right or wrong is heavily, if unconsciously, influenced by what we believe our peers think: morality by tribal consensus, guided not by some better angel or higher power but by self-preserving deference to the tyranny of cousins.
242
In an experiment in rural Mexico, researchers produced an audio soap opera whose story discouraged domestic violence against women. In some areas, people had the soap played for them privately in their homes. In others, it was broadcast on village loudspeakers or at community meetings. Men who listened at home were just as prone to domestic violence as they had been before. But men who listened in group settings became significantly less likely to commit abuse. And not out of perceived pressure. Their internal beliefs had shifted, growing morally opposed to domestic violence and supportive of gender equality. The difference was in seeing their peers absorb the soap opera. The conformity impulse—the same one that had led Facebook’s first users to trick themselves into fuming over the news feed—can soak all the way to the moral marrow of your innermost self.
243
Most of the time, deducing our peers’ moral views is not so easy. So we use a shortcut. We pay special attention to a handful of peers whom we consider to be influential, take our cues from them, and assume this will reflect the norms of the group as a whole. The people we pick as moral benchmarks are known as “social referents.” In this way, morality is “a sort of perceptual task,” Paluck said. “Who in our group is actually popping out to us? Who do we recruit in our memories when we think about what’s common, what’s desirable?”
243
online, our social referents, the people artificially pushed into our moral fields of vision, are the superposters. Not because they are persuasive, thoughtful, or important, but because they drive engagement. That was something unique to platforms like Facebook, Paluck said. Anyone who got a lot of time on the feed became influential. “In real life, some people might talk a lot but not be the most listened to. But Facebook,” she said, “puts them in front of you every time.”
244
And social media doesn’t just surround you with superposters. It displays their messages on vast, public forums, giving users sitewide the impression that social norms were harsher and more extreme than they really were.
254
The YouTube Riot
105
With users posting more than a thousand links every day, rising to the page’s top slots usually required a description designed to provoke an intense emotion.
106
But Reddit comment sections are sorted by each comment’s popularity. And it was expressions of outrage that rose to the top:
107
Users validate one another’s posts by tapping like or retweet, in the process also surfacing or suppressing whatever content most appeals to their collective will.
108
When platforms become a consensus chant of “Get him,” action usually follows.
109
“We enjoy being outraged. We respond to it as a reward.”
The platforms had learned to indulge the outrage that brought their users “a rush—of purpose, of moral clarity, of social solidarity.”
109
The growing pace of these all-consuming meltdowns, perhaps one a week, indicated that social media was not just influencing the broader culture, but, to some extent, supplanting it, to the ultimate benefit of—and this was an outlandish argument at the time—Donald Trump.
🤦♂️
111
Recall those early tribes of up to 150 people. To survive, the group had to ensure that everyone acted in the collective interest, part of which was getting along with one another. That required a shared code of behavior. But how do you get everyone to internalize, and to follow, that code? Moral outrage is our species’ adaptation for this challenge. When you see someone violating an important norm, you get angry. You want to see them punished. And you feel compelled to broadcast that anger, so that others will also see the violation and want to join in shaming, and perhaps punishing, the transgressor.
111
The desire to punish violators runs so deep that it even shows up in infants. In a set of experiments, children less than a year old were shown two puppets. One puppet shared, the other refused to. The infants consistently took candy away from the bad puppet and rewarded the good. Test subjects just a year or two older would even reward puppets who were cruel to the bad puppet and punish puppets who were nice to it. It was confirmation that moral outrage is not just anger against a transgressor. It is a desire to see the entire community line up against them.
111
Brady,
111
sentimentalism. “It’s this idea that our sense of morality is intertwined with, and maybe even driven by, our emotional responses,” he said. “Which is against this older idea that humans are very rational when it comes to morality.”
112
Popular culture often portrays morality as emerging from our most high-minded selves: the better angels of our nature, the enlightened mind. Sentimentalism says it is actually motivated by social impulses like conformity and reputation management (remember the sociometer?), which we experience as emotion. Neurological research supports this. As people faced with moral dilemmas work out how to respond, they exhibit heavy activity in neural regions associated with emotions. And the emotional brain works fast, often resolving to a decision before conscious reason even has a chance to kick in. Only when they were asked to explain their choice would research subjects activate the parts of their brain responsible for rational calculation, which they used, retroactively, to justify whatever emotion-driven action they’d already decided on.
112
Those moral-emotional choices seemed reliably to serve a social purpose, like seeking peers’ approval, rewarding a Good Samaritan, or punishing a transgressor. But the instinctual nature of that behavior leaves it open to manipulation. Which is exactly what despots, extremists, and propagandists have learned to do, rallying people to their side by triggering outrage—often at some scapegoat or imagined wrongdoer. What would happen when, inevitably, social platforms learned to do the same?
113
She acknowledged to herself, for the first time, how much of her social media experience was organized around provoking or participating in public-shaming campaigns. Some lasted weeks and others a few minutes. Some were in service of an important cause, or so she told herself, others merely because someone had pissed her off. “I used to single out sexist tweets that were sent my way. I would just retweet it, and let my internet followers handle it,” she said. “I don’t do that anymore because it’s just asking for somebody to get harassed.”
115
A New York Times Magazine article suggested that Sacco and others like her had been made to suffer for our amusement, or simply because we had lost control, “as if shamings were now happening for their own sake, as if they were following a script.”
119
Many of these incidents had a left-wing valence to them, leading to fears of a “cancel culture” run amok. But this merely reflected the concentration of left-leaning users in academic, literary, journalistic, and other spaces that tend to be more visible in American life. The same pattern was also unfolding in right-leaning communities. But most such instances were dismissed as the work of fringe weirdos (Gamergate, anti-vaxxers) or extremists (incels, the alt right). Right or left, the common variable was always social media, the incentives it imposes, the behavior it elicits.
119
The consequences extended beyond handfuls of people targeted by arguably misplaced or disproportionate anger. Public life itself was becoming more fiercely tribal, more extreme, more centered on hating and punishing the slightest transgression. “I’m telling you, these platforms are not designed for thoughtful conversation,” Wu said. “Twitter, and Facebook, and social media platforms are designed for: ‘We’re right. They’re wrong. Let’s put this person down really fast and really hard.’ And it just amplifies every division we have.”
Note: this is why I’m skeptical of social media even being a viable place to share ideas and perspectives. Do you really want to appeal to the type of people who are prone to use it? It also makes me question the type of people who use it as a platform and their true goals
122
All great apes despise bullies. Chimpanzees, for instance, show preferential treatment toward peers who are kind to them and disfavor those who are cruel. But they have no way of sharing that information with one another. Bullies never suffer from poor reputations because there is, without language, no such thing. That changed when our ancestors developed language sophisticated enough to discuss one another’s behavior. Aggression went from an asset—the means by which alpha males dominated their clan—to a liability that the wider group, tired of being lorded over, could band together to punish.
122
“Language-based conspiracy was the key, because it gave whispering beta males the power to join forces to kill alpha-male bullies,” Wrangham wrote in a pathbreaking 2019 book. Every time an ancient human clan tore down a despotic alpha, they were doing the same thing that Lyudmila Trut did to her foxes: selecting for docility. More cooperative males reproduced, the aggressive ones did not. We self-domesticated.
123
But just as early humans were breeding one form of aggression out, they were selecting another in: the collective violence they’d used both to topple the alphas and to impose a new order in their place. Life became ruled by what the anthropologist Ernest Gellner called “tyranny of the cousins.” Tribes became leaderless, consensus-based societies, held together by fealty to a shared moral code, which the group’s adults (the “cousins”) enforced, at times violently. “To be a nonconformist, to offend community standards, or to gain a reputation for being mean became dangerous adventures,” Wrangham wrote. Upset the collective and you might be shunned or exiled—or wake up to a rock slamming into your forehead. Most hunter-gatherer societies live this way today, suggesting that the practice draws on something intrinsic to our species.
123
The basis of this new order was moral outrage. It was how you alerted your community to misbehavior—how you rallied them, or were yourself rallied, to punish a transgression. And it was the threat that hung over your head from birth until death, keeping you in line. Moral outrage, when it gathers enough momentum, becomes what Wrangham calls “proactive” and “coalitional” aggression—colloquially known as a mob. When you see a mob, you are seeing the cousins’ tyranny, the mechanism of our self-domestication. This threat, often deadly, became an evolutionary pressure in its own right, leading us to develop ultrafine sensitivities to the group’s moral standards—and an instinct to go along. If you want to prove to the group that it can trust you to enforce its standards, pick up a rock and start throwing. Otherwise, you might be next.
124
When you see a post expressing moral outrage, 250,000 years of evolution kick in. It impels you to join in. It makes you forget your internal moral senses and defer to the group’s. And it makes inflicting harm on the target of the outrage feel necessary—even intensely pleasurable. Brain scans find that, when subjects harm someone they believe is a moral wrongdoer, their dopamine-reward centers activate. The platforms also remove many of the checks that normally restrain us from taking things too far. From behind a screen, far from our victims, there is no pang of guilt at seeing pain on the face of someone we’ve harmed. Nor is there shame at realizing that our anger has visibly crossed into cruelty. In the real world, if you scream expletives at someone for wearing a baseball cap in an expensive restaurant, you’ll be shunned yourself, punished for violating norms against excessive displays of anger and for disrupting your fellow restaurant-goers. Online, if others take note of your outburst at all, it will likely be to join in.
125
Social platforms are unnaturally rich with sources of moral outrage; there is always a tweet or news development to get angry about, along with plenty of users to highlight it to a potential audience of millions. It’s like standing in the center of the largest crowd ever assembled, knowing that, at any moment, it might transform into a mob. This creates powerful incentives for what the philosophers Justin Tosi and Brandon Warmke have termed “moral grandstanding”—showing off that you are more outraged, and therefore more moral, than everyone else. “In a quest to impress peers,” Tosi and Warmke write, “grandstanders trump up moral charges, pile on in cases of public shaming, announce that anyone who disagrees with them is obviously wrong, or exaggerate emotional displays.”
125
Off-line, moral grandstanders might heighten a particular group’s sensitivities a few degrees by pressuring peers to match them. Or they might simply annoy everyone. But on social networks, grandstanders are systematically rewarded and amplified. This can trigger “a moral arms race,” Tosi and Warmke cautioned, in which people “adopt extreme and implausible views, and refuse to listen to the other side.”
125
If this were just a few internet forums, the consequences might be some unpleasant arguments. But by the mid-2010s social networks had become the vector through which much of the world’s news was consumed and interpreted. This created a world, Tosi and Warmke warned in a follow-up study with the psychologist Joshua Grubbs, defined by “homogeneity, ingroup/outgroup biases, and a culture that encourages outrage.”
126
the platform’s extreme bias toward outrage meant that misinformation prevailed, which created demand for more outrage-affirming rumors and lies.
126
Such anger creates a drive, sometimes overwhelming, for finding someone to punish. In a disturbing experiment, subjects were asked to assign a punishment for someone else’s moral transgression. They became harsher when led to believe that they were being watched, harsher still when told that their audience was highly political or ideological. Many heightened the punishment even if they thought their victim did not deserve it. Their motivation was simple: they expected that cruelty would make the observers like them more.
The effect scales; people express more outrage, and demonstrate more willingness to punish the undeserving, when they think their audience is larger. And there is no larger audience on earth than Twitter or Facebook.
133
In politics, the results rarely privileged liberation. When two scholars analyzed 300 million tweets sent during the 2012 presidential campaign, they found that false tweets had consistently outpaced true ones. The rumors and lies indulged or encouraged anger at the other side, the scholars warned, widening the polarization that was already one of the gravest ailments facing American democracy. The resulting division was opening space for opportunists.
136
Chaslot took on an essential component of this vision: search. Traditional search relies on keywords: type in whales and you get a list of the newest or most-watched videos tagged with that word. Chaslot’s team would replace that with an A.I. designed to identify the video that best served the user’s interests.
136
It could even suggest what to watch next, guiding users through an infinite world of discovery and delight. “It was a work,” Chaslot told me, “that had a huge, positive impact on actual, day-to-day life, for so many people.”
137
Google, Facebook, and others hoovered up the top names in the field of machine learning. Many got a version of the same assignment as Chaslot. Rather than identify spam, they would build machines that would learn precisely what combinations of text, images, and sounds would best keep us scrolling.
138
The page would now recommend, alongside your video, thumbnails of a dozen others you might watch next:
138
Each is selected from among YouTube’s billions of videos by a corporate A.I. shorthanded as “the algorithm”—one of the most powerful machine-learning systems in consumer tech. Its selections proved enormously effective. “Within a few months, with a small team, we had an algorithm that increased watch time to generate millions of dollars of additional ad revenue,” Chaslot said, “so it was really, really exciting.”
139
YouTube’s system seeks something more far-reaching than a monthly subscription fee. Its all-seeing eye tracks every detail of what you watch, how long you watch it, what you click on next. It monitors this across two billion users, accruing what is surely the largest dataset on viewer preferences ever assembled, which it constantly scans for patterns. Chaslot and others tweaked the system as it went, nudging its learning process to better accomplish its goal: maximum watch time.
139
One of the algorithm’s most powerful tools is topical affinity. If you watch a cat video all the way through, Chaslot explained, YouTube will show you more on return visits. It will especially push whatever cat videos it has deemed most effective at capturing attention. Say, a long compilation of outrageous kitten bloopers.
139
The effect is to pull users toward ever more titillating variations on their interests.
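A toy version of that topical-affinity logic makes the pull easy to see. The scoring rule, numbers, and class below are assumptions for illustration, not YouTube's actual algorithm: finishing a video strengthens its topic, and candidates are ranked by learned affinity times how long that video has historically held viewers, so the stickiest variation on your interest rises to the top.

```python
from collections import defaultdict

# Illustrative only: a toy sketch of "topical affinity" plus watch-time
# maximization. All values and the scoring rule are assumptions.

class ToyWatchTimeRecommender:
    def __init__(self):
        self.affinity = defaultdict(float)  # topic -> learned interest

    def record_view(self, topic: str, fraction_watched: float):
        """Watching most of a video strengthens the topic; abandoning it barely does."""
        self.affinity[topic] += fraction_watched

    def rank(self, candidates: list[tuple[str, str, float]]) -> list[str]:
        """candidates: (title, topic, avg_minutes_watched). Score = affinity x stickiness."""
        scored = sorted(
            candidates,
            key=lambda c: self.affinity[c[1]] * c[2],
            reverse=True,
        )
        return [title for title, _, _ in scored]

rec = ToyWatchTimeRecommender()
rec.record_view("cats", fraction_watched=1.0)   # watched all the way through
rec.record_view("news", fraction_watched=0.2)   # clicked away quickly

print(rec.rank([
    ("calm cat nap compilation", "cats", 3.0),
    ("outrageous kitten bloopers, 40 min", "cats", 18.0),  # stickiest cat video wins
    ("local news clip", "news", 2.0),
]))
```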
140
Just as with Twitter and Reddit, outrage and tribalism activate users’ emotions most effectively on YouTube, making them watch more and more videos—exactly what Goodrow had asked Chaslot’s team to prioritize.
141
That fall, in 2012, at a YouTube leadership conference in Los Angeles, an executive pulled aside Goodrow and a few others to tell them he was going to make a surprise announcement. The company would reorient itself around an all-consuming goal: to increase daily watch time by a factor of ten.
141
When could they get this done? the executive wanted to know. What was the time frame? Goodrow answered that 2015 would be too soon. But 2017, he wrote, “sounded weird” because it was a prime number.
Note: a sure sign he is opposed to cicada values
141
in 2011.
141
a thirty-year-old activist named Eli Pariser walked on stage to warn the audience of tech executives and engineers that their algorithms might threaten democracy itself.
142
The simplest algorithmic sorting can alter people’s attitudes severely enough to swing elections. In one 2015 experiment, Americans were told to choose between two fictional candidates by researching them online. Each participant was shown the same thirty search results on a Google mockup, but in different orders. Participants consistently gave the higher-ranked results greater psychological weight, even when they read all thirty of them. The effect, the experimenters concluded, could alter up to 20 percent of undecided participants’ voting intentions. The study’s author, Robert Epstein, a psychologist and founder of the Cambridge Center for Behavioral Studies, noted in an August 2015 article, “America’s next president could be eased into office not just by TV ads or speeches, but by Google’s secret decisions.”
143
Pariser’s fear, several years earlier, had been more fundamental. “There’s this epic struggle going on between our future, aspirational selves and our more impulsive, present selves,” he said. Even in 2011, years before YouTube or Facebook superpowered their systems to such destructive results, these earlier, simpler algorithms already reliably took the side of the impulses. And they usually won, proliferating “invisible autopropaganda, indoctrinating us with our own ideas.”
143
At YouTube, as the algorithm got one upgrade after another, some in the company’s trenches, like Chaslot, came to fear the system was pushing users into dangerous echo chambers
154
Facebook to select what posts users see and what groups they are urged to join; Twitter to surface posts that might entice a user to keep scrolling and tweeting.
“We design a lot of algorithms so we can produce interesting content for you,” Zuckerberg said in an interview. “It analyzes all the information available to each user and it actually computes what’s going to be the most interesting piece of information.” An ex-Facebooker put it more bluntly: “It is designed to make you want to keep scrolling, keep looking, keep liking.” Another: “That’s the key. That’s the secret sauce. That’s how, that’s why we’re worth X billion dollars.”
155
In 2014, the same year that Wojcicki took over YouTube, Facebook’s algorithm replaced its preference for Upworthy-style clickbait with something even more magnetic: emotionally engaging interactions. Across the second half of that year, as the company gradually retooled its systems, the platform’s in-house researchers tracked 10 million users to understand the effects. They found that the changes artificially inflated the amount of pro-liberal content that liberal users saw and the amount of pro-conservative content that conservatives saw. Just as Pariser had warned. The result, even if nobody at Facebook had consciously intended as much, was algorithmically ingrained hyperpartisanship. This was more powerful than sorting people into the Facebook equivalent of a Fox News or MSNBC news feed, because while the relationship between a cable TV network and the viewer is one-way, the relationship between a Facebook algorithm and the user is bidirectional. Each trains the other. The process, as Facebook researchers put it, somewhat gingerly, in an implied warning that the company did not heed, was “associated with adopting more extreme attitudes over time and misperceiving facts about current events.”
159
Guillaume Chaslot
159
The man was watching YouTube, video after video, all discussing conspiracies. Chaslot’s first thought was an engineer’s: “His watch session is fantastic.”
159
“He was telling me, ‘Oh, but there are so many videos, it has to be true,’” Chaslot said. “What convinced him was not the individual videos, it was the repetition. And the repetition came from the recommendation engine.”
160
YouTube was exploiting a cognitive loophole known as the illusory truth effect. We are, every hour of every day, bombarded with information. To cope, we take mental shortcuts to quickly decide what to accept or reject. One is familiarity; if a claim feels like something we’ve accepted as true before, it probably still is. It’s a gap in our mental defenses you could drive a truck through. In experiments, research subjects bombarded with the phrase “the body temperature of a chicken” will readily agree with variations like “the body temperature of a chicken is 144 degrees.” Chaslot’s seatmate had been exposed to the same crazed conspiracies so many times that his mind likely mistook familiarity for the whiff of truth. As with everything else on social media, the effect is compounded by a false sense of social consensus, which triggers our conformity instincts.
164
To users, for whom the algorithm was invisible, these felt like powerful social cues. It was as if your community had suddenly decided that it valued provocation and outrage above all else, rewarding it with waves of attention that were, in reality, algorithmically generated. And because the algorithm down-sorted posts it judged as unengaging, the inverse was true, too. It felt as if your peers suddenly scorned nuance and emotional moderation with the implicit rejection of ignoring you. Users seemed to absorb those cues, growing meaner and angrier, intent on humiliating out-group members, punishing social transgressors, and validating one another’s worldviews.
168
Six
The Fun House Mirror
176
William Brady, who as an undergrad had been a social media brawler on behalf of veganism, was now a psychologist exploring how negative emotions spread. Brady was embedded with a New York University lab developing new methods for analyzing social media. On Twitter, like everywhere else, Trump was all outrage—against minorities, against institutions—as a motivator to rally his supporters. Brady knew that moral outrage can become infectious in groups, and that it can alter the mores and behaviors of people exposed to it. Was it possible that social media, more than just amplifying Trump, actually pulled Americans closer to his us-versus-them, tear-it-all-down way of thinking?
176
His team scraped half a million tweets that referenced climate change, gun control, or same-sex marriage, using them as a proxy for political discussion. Language-detection programs tested each post, and the person who sent it, for things like emotional sentiment and political attitude. What kind of messages traveled the farthest? Happy messages? Sad messages? Conservative or liberal messages? The results were noisy. Happy tweets, for example, spread too inconsistently for Brady to conclude that the platform had an effect one way or the other. But on one metric, results rang clear: across topics, across political factions, what psychologists refer to as “moral-emotional words” consistently boosted any tweet’s reach.
177
Moral-emotional words convey feelings like disgust, shame, or gratitude. (“Refugees deserve compassion.” “That politician’s views are repulsive.”) More than just words, these are expressions of, and calls for, communal judgment, positive or negative. When you say, “Suzy’s behavior is appalling,” you’re really saying, “Suzy has crossed a moral line; the community should take notice and maybe even act.” That makes these words different from either narrowly emotional sentiments (“Overjoyed at today’s marriage equality ruling”) or purely moral ones (“The president is a liar”), for which Brady’s effect didn’t appear. Tweets with moral-emotional words, he found, traveled 20 percent farther—for each moral-emotional word. The more of them in a tweet, the farther it spread. Here was evidence that social media boosted not just Trump, who used more moral-emotional words than other candidates, but his entire mode of politics. Hillary Clinton’s tweets, which emphasized rising above outrages rather than stoking them, underperformed.
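The arithmetic of that 20 percent boost compounds quickly. Assuming, for illustration, that each additional moral-emotional word multiplies expected reach by roughly 1.2 (the compounding is a simplification, not Brady's exact model), a tweet with three or four such words travels nearly twice as far as the same tweet without them:

```python
# Illustrative arithmetic only: reach scaling as 1.2 ** n, where n is the
# number of moral-emotional words. The compounding assumption is mine.

BOOST_PER_WORD = 1.20

for n_words in range(0, 5):
    multiplier = BOOST_PER_WORD ** n_words
    print(f"{n_words} moral-emotional words -> ~{multiplier:.2f}x the baseline reach")
# 0 -> 1.00x, 1 -> 1.20x, 2 -> 1.44x, 3 -> 1.73x, 4 -> 2.07x
```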
178
Brady found something else. When a liberal posted a tweet with moral-emotional words, its reach substantially increased among other liberals, but declined with conservatives. (And vice versa.) It won the user more overall attention and validation, in other words, at the cost of alienating people from the opposing side. Proof that Twitter encouraged polarization. The data also suggested that users, however unconsciously, obeyed those incentives, increasingly putting down people on the other side. “Negative posts about political out-groups tend to receive much more engagement on Facebook and Twitter,” Steve Rathje, a Cambridge scholar, said in summarizing a later study that drew on Brady’s research. But this was not particular to partisanship: the effect privileges any sentiment, and therefore any politics, built on disparaging social out-groups of any kind.
182
YouTube built it into an algorithm upgrade called Reinforce (though a company engineer described its actual aim as increasing watch time).
183
contact theory. Formulated after World War II to explain why desegregated troops became less prone to racism, the theory suggested that social contact led distrustful groups to humanize one another.
183
But subsequent research has shown that this process works only under narrow circumstances: managed exposure, equality of treatment, neutral territory, and a shared task. Simply mashing hostile tribes together, researchers repeatedly found, worsens animosity.
184
Even in its most rudimentary form, the very structure of social media encourages polarization. Reading an article and then the comments field beneath it, an experiment found, leads people to develop more extreme views on the subject in the article. Control groups that read the article with no comments became more moderate and open-minded. It wasn’t that the comments themselves were persuasive; it was the mere context of having comments at all. News readers, the researchers discovered, process information differently when they are in a social environment: social instincts overwhelm reason, leading them to look for affirmation of their side’s righteousness.
Note: solitude doesn’t make us less emotional and more “rational,” it simply takes the social pressure away
185
Facebook groups amplify this effect even further. By putting users in a homogenous social space, studies find, groups heighten their sensitivity to social cues and conformity. This overpowers their ability to judge false claims and increases their attraction to identity-affirming falsehoods, making them likelier to share misinformation and conspiracies. “When we encounter opposing views in the age and context of social media, it’s not like reading them in a newspaper while sitting alone,” the sociologist Zeynep Tufekci has written. “It’s like hearing them from the opposing team while sitting with our fellow fans in a football stadium.… We bond with our team by yelling at the fans of the other one.”
185
Tweaking the algorithms to push users one way or another, toward fellow partisans or away from them, would just end up producing different versions of the dangerous forces made all but inevitable by the fundamental design of the platforms. It was why social scientists were among the first to come around to the view that, as media scholar Siva Vaidhyanathan put it, “the problem with Facebook is Facebook. It’s not any particular attribute along the margins that can be fixed and reformed.”