Twitch Will Act on ‘Serious’ Offenses That Happen Off-Stream

Twitch is finally coming to terms with its responsibility as a king-making microcelebrity machine, not just a service or a platform. Today, the Amazon-owned company announced a formal and public policy for investigating streamers’ serious indiscretions in real life, or on services like Discord or Twitter.

Last June, dozens of women came forward with allegations of sexual misconduct against prominent video game streamers on Twitch. On Twitter and other social media, they shared harrowing experiences of streamers leveraging their relative renown to push boundaries, resulting in serious personal and professional harm. Twitch would eventually ban or suspend several accused streamers, a couple of whom were “partnered,” or able to receive money through Twitch subscriptions. At the same time, Twitch’s #MeToo movement sparked larger questions about what responsibility the service has for the actions of its most visible users both on and off stream.

In the course of investigating those problem users, Twitch COO Sara Clemens tells WIRED, Twitch’s moderation and law enforcement teams learned how challenging it is to review and make decisions based on users’ behavior IRL or on other platforms like Discord. “We realized that not having a policy to look at off-service behavior was creating a threat vector for our community that we had not addressed,” says Clemens. Today, Twitch is announcing its solution: an off-services policy. In partnership with a third-party law firm, Twitch will investigate reports of offenses like sexual assault, extremist behavior, and threats of violence that occur off stream.

“We’ve been working on it for some time,” says Clemens. “It’s certainly uncharted space.”

Twitch is at the forefront of efforts to ensure that not only the content on its platform but also the people who create it are safe for the community. (The policy applies to everyone: partnered, affiliate, and even relatively unknown streamers.) For years, sites that support digital celebrity have banned users for off-platform indiscretions. In 2017, PayPal cut off a swath of white supremacists. In 2018, Patreon removed anti-feminist YouTuber Carl Benjamin, known as Sargon of Akkad, for racist speech on YouTube. Meanwhile, sites that directly grow or rely on digital celebrity don’t tend to rigorously vet their most famous or influential users, especially when those users relegate their problematic behavior to Discord servers or industry parties.

Despite never publishing formal policies, king-making services like Twitch and YouTube have, in the past, deplatformed users they believed were detrimental to their communities for things those users said or did elsewhere. In late 2020, YouTube temporarily demonetized the prank channel NELK after its creators threw ragers at Illinois State University while the local limit on social gatherings was 10 people. Those actions, and public statements about them, are the exception rather than the rule.

“Platforms sometimes have special mechanisms for escalating this,” says Kat Lo, moderation lead at nonprofit tech literacy company Meedan, referring to the direct lines high-profile users often have to company employees. She says off-services moderation has been happening at the biggest platforms for at least five years. But generally, she says, companies don’t often advertise or formalize these processes. “Investigating off-platform behavior requires a high capacity for investigation, finding evidence that can be verifiable. It’s difficult to standardize.”

In the second half of 2020, Twitch received 7.4 million user reports for “all types of violations” and acted on reports 1.1 million times, according to its recent transparency report. In that period, Twitch acted on 61,200 instances of alleged hateful conduct, sexual harassment, and harassment. That’s a heavy lift. (Twitch also acted on 67 instances of terrorism and escalated 16 cases to law enforcement.) Although they make up a huge portion of user reports, harassment and bullying are not among the listed behaviors Twitch will begin investigating off-platform unless the behavior is also occurring on Twitch. The off-services behaviors that will trigger investigations are what Twitch’s blog post calls “serious offenses that pose a substantial safety risk to the community”: deadly violence and violent extremism, explicit and credible threats of mass violence, hate group membership, and so on. While bullying and harassment are not included now, Twitch says its new policy is designed to scale.
