A new lawsuit may force YouTube to own up to the mental health consequences of content moderation

Facebook agreed to pay out $52 million to moderators suffering from PTSD and other conditions — and now YouTube is being asked to do the same

For big tech platforms, one of the more urgent questions to arise during the pandemic’s early months was how the forced closure of offices would change their approach to content moderation. Facebook, YouTube, and Twitter all rely on huge numbers of third-party contract workers to police their networks, and traditionally those workers have sat side by side in big offices. When tech companies shuttered their offices, they closed down most of their content moderation facilities as well.

Happily, they continued to pay their moderators — even those who could no longer work, because their jobs required them to use secure facilities. But with usage of social networks surging and an election on the horizon, the need for moderation had never been greater. And so Silicon Valley largely shifted moderation duties to automated systems.

The question was whether it would work — and this week, we began to get some details. Both Facebook and YouTube had warned that automated systems would make more mistakes than human beings. And they were right. Here’s James Vincent in The Verge:

Around 11 million videos were removed from YouTube between April and June, says the FT, or about double the usual rate. Around 320,000 of these takedowns were appealed, and half of the appealed videos were reinstated. Again, the FT says that’s roughly double the usual figure: a sign that the AI systems were over-zealous in their attempts to spot harmful content.

As YouTube’s chief product officer, Neal Mohan, told the FT: “One of the decisions we made [at the beginning of the pandemic] when it came to machines who couldn’t be as precise as humans, we were going to err on the side of making sure that our users were protected, even though that might have resulted in [a] slightly higher number of videos coming down.”

It turns out that automated systems didn’t take down a slightly higher number of videos — they took down double the number of videos. This is worth thinking about for all of us, but especially for those who complain that technology companies censor too much content. For a lot of reasons — some of which I’ll get to in a minute — companies like YouTube are under increasing pressure both to remove more bad posts and to do so automatically. Those systems will surely improve over time, but the past few months have shown us the limits of that approach. They’ve also shown that when you pressure tech companies to remove more harmful posts — for good reasons — the tradeoff is an uptick in censorship.

We almost never talk about those two pressures in tandem, and yet doing so is essential to crafting solutions we can all live with.

There’s another, more urgent tradeoff in content moderation: the use of automated systems, which are error-prone but immune to the job’s harms, versus the use of human beings, who are much more accurate but vulnerable to its effects.

Last year, I traveled to Austin and to Washington, DC to profile current and former moderators for YouTube and Google. I spent most of my time with people who work on YouTube’s terror queue — the ones who examine videos of violent extremism each day and remove them from the company’s services. It was part of a year-long series I did about content moderators that attempted to document the long-term consequences of doing this work. And at YouTube, just as at Facebook, many of the moderators I spoke to suffer from post-traumatic stress disorder.

One of those moderators, who I called Peter in the story, described his daily life to me this way:

Since he began working in the violent extremism queue, Peter noted, he has lost hair and gained weight. His temper is shorter. When he drives by the building where he works, even on his off days, a vein begins to throb in his chest.

“Every day you watch someone beheading someone, or someone shooting his girlfriend,” Peter tells me. “After that, you feel like wow, this world is really crazy. This makes you feel ill. You’re feeling there is nothing worth living for. Why are we doing this to each other?”

I thought of Peter this week while reading about a new lawsuit filed on behalf of workers like him. Here’s Queenie Wong at CNET:

A former content moderator is suing Google-owned YouTube after she allegedly developed depression and symptoms associated with post-traumatic stress disorder from repeatedly watching videos of beheadings, child abuse and other disturbing content.

“She has trouble sleeping and when she does sleep, she has horrific nightmares. She often lays awake at night trying to go to sleep, replaying videos that she has seen in her mind,” says the lawsuit, which was filed in a California superior court on Monday. The former moderator also can’t be in crowded places because she’s afraid of mass shootings, suffers from panic attacks and has lost friends because of her anxiety. She also has trouble being around kids and is now frightened to have children, according to the lawsuit.

The law firm involved in the suit was also part of a similar suit against Facebook, Wong reported. That’s a significant detail, in large part because of what Facebook did in that case: agree to settle it for $52 million. That settlement, which still requires final approval from a judge, applies only to Facebook’s US moderators. And with similar suits pending around the world, the final cost to Facebook will likely be much higher.

Having talked to more than 100 content moderators at services of all sizes, I’m convinced the work can take a similar toll no matter where it is done. Only a fraction of workers may develop full-blown PTSD from viewing disturbing content daily, but many more will develop other serious mental health conditions. And because tech companies have largely outsourced this work to vendors, that cost has mostly been hidden from them.

I asked YouTube what it made of the new lawsuit.

“We cannot comment on pending litigation, but we rely on a combination of humans and technology to remove content that violates our Community Guidelines, and we are committed to supporting the people who do this vital and necessary work,” a spokesman said. “We choose the companies we partner with carefully and work with them to provide comprehensive resources to support moderators’ well-being and mental health, including by limiting the time spent each day reviewing content.”

Facebook told me all the same things, before agreeing to pay out $52 million.

Anyway, I write about these stories in tandem today to highlight just how hard the tradeoffs are here. Rely too much on machines and they’ll remove lots of good speech. Rely too much on human beings and they’ll wind up with debilitating mental health conditions. So far, no global-scale technology company has managed to get this balance right. In fact, we still have no real agreement on what getting it “right” would even look like.

We do know, however, that employers are responsible for protecting their moderators’ health. It took a lawsuit from contractors to get Facebook to acknowledge the harms of moderating extremist content. And when this new lawsuit is ultimately resolved, I’d be surprised if YouTube weren’t forced to acknowledge that, too.