
How Facebook’s formula fostered rage and misinformation

Time and again, Facebook made adjustments to weightings after they had caused harm


Facebook’s levers rely on signals most users wouldn’t notice, like how many long comments a post generates, whether a video is live or recorded, or whether comments were made in plain text or with cartoon avatars, the documents show. It even accounts for the computing load that each post requires and the strength of the user’s Internet signal. Depending on the lever, the effects of even a tiny tweak can ripple across the network, shaping whether the news sources in your feed are reputable or sketchy and political or not, whether you see more of your real friends or more posts from groups Facebook wants you to join, and whether what you see is likely to anger, bore or inspire you.

Beyond the debate over the angry emoji, the documents show Facebook employees wrestling with tough questions about the company’s values, performing cleverly constructed analyses. When they found that the algorithm was exacerbating harms, they advocated for tweaks they thought might help. But those proposals were sometimes overruled.

When boosts, like those for emoji, collided with “deboosts” or “demotions” meant to limit potentially harmful content, all that complicated maths added up to a problem in protecting users. The average post got a score of a few hundred, according to the documents. But in 2019, a Facebook data scientist discovered there was no limit to how high the ranking scores could go.

If Facebook’s algorithms thought a post was bad, Facebook could cut its score in half, pushing most instances of the post way down in users’ feeds. But a few posts could get scores as high as a billion, according to the documents. Cutting an astronomical score in half to “demote” it would still leave it with a score high enough to appear at the top of the user’s feed.
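To see why halving an unbounded score is such a weak brake, consider a minimal sketch. The scores and the halving factor below are illustrative assumptions drawn from the figures in the documents, not Facebook’s actual code.

```python
# Minimal sketch of the demotion problem described above. The scores and the
# halving factor are illustrative assumptions, not Facebook's actual values.

def demote(score: float, factor: float = 0.5) -> float:
    """Scale a ranking score down to 'demote' a post."""
    return score * factor

typical_post = 300            # the average post scored "a few hundred"
outlier_post = 1_000_000_000  # a few posts scored "as high as a billion"

print(demote(typical_post))   # 150.0 -- buried well below ordinary posts
print(demote(outlier_post))   # 500000000.0 -- still vastly higher than any
                              # typical post, so it stays at the top of feeds
```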

“Scary thought: civic demotions not working,” one Facebook employee noted.

The culture of experimentation ran deep at Facebook, as engineers pulled levers and measured the results. An experiment in 2014 sought to manipulate the emotional valence of posts shown in users’ feeds to be more positive or more negative, and then watch to see whether the users’ own posts changed to match, raising ethical concerns, The Post reported at the time. Another, reported by Haugen to Congress this month, involved turning off safety measures for a subset of users as a comparison to see if the measures worked at all.

A previously unreported set of experiments involved boosting some people more frequently into the feeds of some of their randomly chosen friends – and then, once the experiment ended, examining whether the pair of friends continued communicating, according to the documents. The researcher’s hypothesis, in other words, was that Facebook could cause relationships to become closer.

In 2017, Facebook was trying to reverse a worrying decline in how much people were posting and talking to each other on the site, and the emoji reactions gave it five new levers to pull. Each emotional reaction was worth five likes at the time. The logic was that a reaction emoji signalled the post had made a greater emotional impression than a like; reacting with an emoji took an extra step beyond the single click or tap of the like button. But Facebook was coy with the public as to the importance it was placing on these reactions: The company told Mashable in 2017 that it was weighting them just “a little more than likes”.

The move was consistent with a pattern, highlighted in the documents, in which Facebook set the weights very high on new features it was trying to encourage users to adopt. By training the algorithm to optimise for those features, Facebook’s engineers all but ensured they’d be widely used and seen. Not only that, but anyone posting on Facebook with the hope of reaching a wide audience – including publishers and political actors – would inevitably catch on that certain types of posts were working better than others.

At one point, in a public reply to a user’s comment, CEO Mark Zuckerberg even encouraged users to use the angry reaction to signal that they disliked something, even though doing so would make Facebook show them similar content more often.

Replies to a post, which signalled a larger effort than the tap of a reaction button, were weighted even higher, up to 30 times as much as a like. Facebook had found that interaction from a user’s friends on the site would create a sort of virtuous cycle that pushed users to post even more. The Wall Street Journal reported last month on how Facebook’s greater emphasis on comments, replies to comments and replies to re-shares – part of a metric it called “meaningful social interactions” – further incentivised divisive political posts. (That article also mentioned the early weight placed on the angry emoji, though not the subsequent debates over its impact.)
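Taken together, these figures describe a weighted sum over engagement signals. The sketch below uses the reported multipliers (one for a like, five for an emoji reaction, up to 30 for a reply) purely for illustration; the weight table and the scoring function are assumptions, not Facebook’s implementation.

```python
# Illustrative weighted engagement score using the multipliers reported in the
# documents (like = 1, emoji reaction = 5, reply = 30). The weight table and
# this function are assumptions for explanation, not Facebook's implementation.

WEIGHTS = {
    "like": 1,
    "reaction": 5,   # love, haha, wow, sad, angry under the 2017 weighting
    "reply": 30,     # replies signalled more effort, so they counted far more
}

def engagement_score(likes: int, reactions: int, replies: int) -> int:
    return (likes * WEIGHTS["like"]
            + reactions * WEIGHTS["reaction"]
            + replies * WEIGHTS["reply"])

# A provocative post with a handful of reactions and replies can outrank a post
# that quietly collects many more likes:
print(engagement_score(likes=200, reactions=0, replies=0))  # 200
print(engagement_score(likes=20, reactions=10, replies=5))  # 220
```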

The goal of that metric is to “improve people’s experience by prioritising posts that inspire interactions, particularly conversations, between family and friends,” Lever said.

The first downgrade to the angry emoji weighting came in 2018, when Facebook cut it to four times a like, keeping the same weight for all of the emotions.

But it was apparent that not all emotional reactions were the same. Anger was the least used of the six emoji reactions at 429 million clicks per week, compared with 63 billion likes and 11 billion “love” reactions, according to a 2020 document. Facebook’s data scientists found that angry reactions were “much more frequent” on problematic posts: “civic low quality news, civic misinfo, civic toxicity, health misinfo, and health antivax content,” according to a document from 2019. Its research that year showed the angry reaction was “being weaponised” by political figures.

In April 2019, Facebook put in place a mechanism to “demote” content that was receiving disproportionately angry reactions, although the documents don’t make clear how or where that was used, or what its effects were.

By July, a proposal began to circulate to cut the value of several emoji reactions down to that of a like, or even count them for nothing. The “angry” reaction, along with “wow” and “haha,” occurred more frequently on “toxic” content and misinformation. In another proposal, from late 2019, “love” and “sad” – apparently called “sorry” internally – would be worth four likes, because they were safer, according to the documents.

The proposal depended on Facebook higher-ups being “comfortable with the principle of different values for different reaction types,” the documents said. This would’ve been an easy fix, the Facebook employee said, with “fewer policy concerns” than a technically challenging attempt to identify toxic comments.

But at the last minute, the proposal to expand those measures worldwide was nixed.

“The voice of caution won out by not trying to distinguish different reaction types and hence different emotions,” a staffer later wrote.

Later that year, as part of a debate over how to adjust the algorithm to stop amplifying content that might subvert democratic norms, the proposal to value angry emoji reactions less was again floated. Another staffer proposed removing the button altogether. But again, the weightings remained in place.

Finally, last year, the flood of evidence broke through the dam. Additional research had found that users consistently didn’t like it when their posts received “angry” reactions, whether from friends or random people, according to the documents. Facebook cut the weight of all the reactions to one and a half times that of a like.

Last September, Facebook finally stopped using the angry reaction as a signal of what its users wanted and cut its weight to zero, taking it out of the equation, the documents show. Its weight is still zero, Facebook’s Lever said. At the same time, it boosted “love” and “sad” to be worth two likes.

It was part of a broader fine-tuning of signals. For example, single-character comments would no longer count. Until that change was made, comments just saying “yes” or “.” – a tactic often used to game the system and appear higher in the news feed – had counted as 15 times the value of a like.
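A minimal sketch of that single-character rule follows; the function name and the filtering logic are assumptions used to illustrate the change, not Facebook’s code.

```python
# Sketch of the tweak described above: strictly single-character comments stop
# counting as a ranking signal. The function name and logic are assumptions.

def counted_comments(comments: list[str]) -> list[str]:
    """Drop single-character comments before they feed into ranking."""
    return [c for c in comments if len(c.strip()) > 1]

sample = [".", "y", "yes", "This changed how I think about the issue."]
print(counted_comments(sample))
# ['yes', 'This changed how I think about the issue.']
```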

“Like any optimisation, there’s going to be some ways that it gets exploited or taken advantage of,” Lars Backstrom, a vice president of engineering at Facebook, said in an e-mailed statement. “That’s why we have an integrity team that is trying to track those down and figure out how to mitigate them as efficiently as possible.”

But time and again, Facebook made adjustments to weightings after they had caused harm. Facebook wanted to encourage users to stream live video, which it favoured over photo and text posts, so its weight could go as high as 600 times that of a like. That had helped cause “ultra-rapid virality for several low quality viral videos”, a document said. Live videos on Facebook played a big role in political events, including both the racial justice protests last year after the killing of George Floyd and the riot at the US Capitol on January 6.

Immediately after the riot, Facebook frantically enacted its “Break the Glass” measures, restoring safety efforts it had previously undone – including a cap on the weight of live videos at only 60. Facebook didn’t respond to requests for comment about the weighting on live videos.

When Facebook finally set the weight on the angry reaction to zero, users began to get less misinformation, less “disturbing” content and less “graphic violence”, company data scientists found. As it turned out, after years of advocacy and pushback, there wasn’t a trade-off after all. According to one of the documents, users’ level of activity on Facebook was unaffected.
