How to Fix Facebook, According to Its Own Employees

Facebook rejects the allegation. “At the heart of these stories is a premise which is false,” said spokesperson Kevin McAlister in an email. “Yes, we’re a business and we make profit, but the idea that we do so at the expense of people’s safety or wellbeing misunderstands where our own commercial interests lie.”

On the other hand, the company recently fessed up to the precise criticism from the 2019 documents. “In the past, we didn’t address safety and security challenges early enough in the product development process,” it said in a September 2021 blog post. “Instead, we made improvements reactively in response to a specific abuse. But we have fundamentally changed that approach. Today, we embed teams focusing specifically on safety and security issues directly into product development teams, allowing us to address these issues during our product development process, not after it.” McAlister pointed to Live Audio Rooms, introduced this year, as an example of a product rolled out under this process.

If that’s true, it’s a good thing. Similar claims made by Facebook over the years, however, haven’t always withstood scrutiny. If the company is serious about its new approach, it will need to internalize a few more lessons.

Your AI Can’t Fix Everything

On Facebook and Instagram, the value of a given post, group, or page is mainly determined by how likely you are to stare at, Like, comment on, or share it. The higher that probability, the more the platform will recommend that content to you and feature it in your feed.
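In rough pseudocode terms, that ranking logic looks something like the sketch below. The names, weights, and signals here are purely illustrative, not Facebook's actual system; the point is simply that content scoring higher on predicted engagement gets surfaced more.

```python
# Illustrative sketch of engagement-based ranking (hypothetical names and weights,
# not Facebook's actual code): score each candidate post by predicted engagement
# probabilities, then sort the feed by that score.

from dataclasses import dataclass

@dataclass
class Candidate:
    post_id: str
    p_like: float      # predicted probability of a Like/reaction
    p_comment: float   # predicted probability of a comment
    p_share: float     # predicted probability of a share

# Illustrative weights; a real system would tune many more signals.
WEIGHTS = {"p_like": 1.0, "p_comment": 4.0, "p_share": 8.0}

def engagement_score(c: Candidate) -> float:
    """Combine predicted engagement probabilities into one ranking score."""
    return (WEIGHTS["p_like"] * c.p_like
            + WEIGHTS["p_comment"] * c.p_comment
            + WEIGHTS["p_share"] * c.p_share)

def rank_feed(candidates: list[Candidate]) -> list[Candidate]:
    """Higher predicted engagement means shown earlier and recommended more."""
    return sorted(candidates, key=engagement_score, reverse=True)
```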

But what gets people’s attention is disproportionately what enrages or misleads them. This helps explain why low-quality, outrage-baiting, hyper-partisan publishers do so well on the platform. One of the internal documents, from September 2020, notes that “low integrity Pages” get most of their followers through News Feed recommendations. Another recounts a 2019 experiment in which Facebook researchers created a dummy account, named Carol, and had it follow Donald Trump and a few conservative publishers. Within days the platform was encouraging Carol to join QAnon groups.

Facebook is aware of these dynamics. Zuckerberg himself explained in 2018 that content gets more engagement as it gets closer to breaking the platform’s rules. But rather than reconsidering the wisdom of optimizing for engagement, Facebook’s answer has mostly been to deploy a mix of human reviewers and machine learning to find the bad stuff and remove or demote it. Its AI tools are widely considered world-class; a February 2021 blog post by chief technology officer Mike Schroepfer claimed that, for the last three months of 2020, “97% of hate speech taken down from Facebook was spotted by our automated systems before any human flagged it.”
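A minimal sketch of how that kind of classify-then-enforce pipeline works is shown below. The thresholds and function names are assumptions for illustration, not Facebook's: a model scores content, high-confidence violations are removed or demoted automatically, and borderline cases go to human reviewers.

```python
# Hypothetical moderation pipeline of the kind described above: an ML classifier
# scores content, confident violations are removed or demoted automatically, and
# ambiguous cases are queued for human review. Thresholds are illustrative.

REMOVE_THRESHOLD = 0.95   # act automatically only when the model is very confident
DEMOTE_THRESHOLD = 0.70   # reduce distribution for likely-but-unconfirmed violations
REVIEW_THRESHOLD = 0.40   # send uncertain cases to human reviewers

def moderate(post_text: str, classify) -> str:
    """Return an enforcement decision for one post.

    `classify` is any model mapping text to a violation probability in [0, 1].
    """
    p_violation = classify(post_text)
    if p_violation >= REMOVE_THRESHOLD:
        return "remove"        # taken down before any user reports it
    if p_violation >= DEMOTE_THRESHOLD:
        return "demote"        # stays up, but ranked lower in feeds
    if p_violation >= REVIEW_THRESHOLD:
        return "human_review"  # ambiguous case, queued for reviewers
    return "allow"
```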

The internal documents, however, paint a grimmer picture. A presentation from April 2020 notes that Facebook removals were reducing the overall prevalence of graphic violence by about 19 percent, nudity and pornography by about 17 percent, and hate speech by about 1 percent. A file from March 2021, previously reported by the Wall Street Journal, is even more pessimistic. In it, company researchers estimate “that we may action as little as 3-5% of hate and ~0.6% of [violence and incitement] on Facebook, despite being the best in the world at it.”
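The two sets of numbers are less contradictory than they sound, because they measure different things: the 97 percent figure is the share of removed posts that the AI caught before any user report, while the 3-5 percent figure is the share of all violating content that gets actioned at all. A quick back-of-the-envelope calculation with hypothetical numbers makes the distinction concrete.

```python
# Illustrative arithmetic (hypothetical numbers) showing how a 97% "proactive
# rate" and a 3-5% "actioned" share can both be true at once.

total_hate_speech = 1_000_000   # hypothetical violating posts on the platform
actioned = 40_000               # posts removed or demoted (~4%, in the 3-5% range)
caught_by_ai_first = 38_800     # of those actioned, flagged by AI before any report

proactive_rate = caught_by_ai_first / actioned   # ~0.97: the stat Facebook cites
actioned_share = actioned / total_hate_speech    # ~0.04: the stat in the leaked file

print(f"proactive rate: {proactive_rate:.0%}, actioned share: {actioned_share:.0%}")
```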
