Facebook Is Everywhere; Its Moderation Is Nowhere Close

Facebook launched support for Arabic in 2009 and scored a hit. Soon after, the service won plaudits for helping the mass protests known as the Arab Spring. By last year, Arabic was the third most common language on the platform, with people in the Middle East and North Africa spending more time each day with Facebook’s services than users in any other region.

When it comes to understanding and policing Arabic content, Facebook has been less successful, according to two internal studies last year. One, a detailed account of Facebook’s handling of Arabic, warns that the company’s human and automated reviewers struggle to comprehend the varied dialects used across the Middle East and North Africa. The result: In a region wracked by political instability, the company wrongly censors benign posts for promoting terrorism while exposing Arabic speakers to hateful speech they shouldn’t see.

“Arabic is not one language,” the study says. “It is better to consider it a family of languages—many of which are mutually incomprehensible.”

The documents on Facebook’s foibles with Arabic are part of a tranche of internal material, known collectively as The Facebook Papers, that shows the company struggling—or neglecting—to manage its platform in places that are far from its headquarters in California, in regions where the vast majority of its users live. Many of these markets are in economically disadvantaged parts of the world, afflicted by the kinds of ethnic tensions and political violence that are often amplified by social media.

The documents were disclosed to the Securities and Exchange Commission and provided to Congress in redacted form by legal counsel for ex-Facebook employee Frances Haugen. The redacted versions were reviewed by a consortium of news organizations, including WIRED.

The collection offers a limited view inside the social network but reveals enough to illustrate the immense challenge created by Facebook’s success. A site for rating the looks of women students at Harvard evolved into a global platform used by nearly 3 billion people in more than 100 languages. Perfectly curating such a service is impossible, but the company’s protections for its users seem particularly uneven in poorer countries. Facebook users who speak languages such as Arabic, Pashto, or Armenian are effectively second-class citizens of the world’s largest social network.

Some of Facebook’s failings detailed in the documents involve genuinely hard technical problems. The company uses artificial intelligence to help manage problematic content—at Facebook’s scale humans cannot review every post. But computer scientists say machine learning algorithms don’t yet understand the nuances of language. Other shortcomings appear to reflect choices by Facebook, which made more than $29 billion in profit last year, about where and how much to invest.

For example, Facebook says nearly two-thirds of the people who use the service do so in a language other than English and that it regulates content in the same way globally. A company spokesperson said it has 15,000 people reviewing content in more than 70 languages and has published its Community Standards in 47. But Facebook offers its service in more than 110 languages; users post in still more.

A December 2020 memo on combating hate speech in Afghanistan warns that users can’t easily report problematic content because Facebook had not translated its community standards into Pashto or Dari, the country’s two official languages. Online forms for reporting hate speech had been only partially translated into the two languages, with many words presented in English. In Pashto, also widely spoken in Pakistan, the memo says Facebook’s translation of the term hate speech “does not seem to be accurate.”

“When combating hate speech on Facebook, our goal is to reduce its prevalence, which is the amount of it that people actually see,” a Facebook spokesperson said in a statement. The company recently released figures suggesting that on average, this has declined worldwide since mid-2020. “This is the most comprehensive effort to remove hate speech of any major consumer technology company, and while we have more work to do we remain committed to getting this right.”