What happened when humans stopped managing social media content

Algorithmic content moderation allowed dangerous and possibly illegal material to slip through the net | Image via iStock

During the ongoing coronavirus pandemic, Facebook, Twitter and Google all used automation instead of people to scrub harmful content from their platforms.

Nobody appreciated the content moderators until they were gone.

As the coronavirus pandemic swept the world, social media giants like Facebook, Google and Twitter did what other companies did. They sent workers home — including the tens of thousands of people tasked with sifting through mountains of online material and weeding out hateful, illegal and sexually explicit content.


In their place, the companies turned to algorithms to do the job. It did not go well.

The COVID-driven experiment represented a real-world baptism of fire for something social media companies have long dreamed of: using machine-learning tools and artificial intelligence — not humans — to police posts on their platforms.

When the social media giants announced the changes, they acknowledged the algorithms might struggle to discriminate between legitimate and illegitimate content. And indeed, the effects were almost immediate.

While far more content was flagged and removed for allegedly breaking the companies’ rules on what could be posted online, in some areas dangerous and possibly illegal material was more likely to slip past the machines.

Facebook and Google roughly doubled the amount of potentially harmful material they removed in the second quarter of 2020 compared with the three months through March, according to the companies’ most recent transparency reports. Twitter has yet to provide figures for 2020.

In Syria, where campaigners and journalists rely on social media to document potential war crimes, scores of activists’ accounts were closed down overnight — often with no right to appeal those decisions. Other content, including news articles and health information linked to the coronavirus, was similarly scrubbed from the internet as the machines got to work.

And yet, a lot of questionable posts remained untouched. In France, campaigners fighting against racism and anti-Semitism noticed a more than 40 percent increase in hate speech on Twitter. Less than 12 percent of those posts were removed, the groups said.

Read more: https://www.politico.eu