Why AI Is Terrible At Content Moderation
Ben Dickson
PC Magazine | September 2019

Every day, Facebook’s artificial intelligence algorithms tackle the enormous task of finding and removing millions of posts containing spam, hate speech, nudity, violence, and terrorist propaganda.

And though the company has access to some of the world’s most coveted talent and tech, it’s struggling to find and remove toxic content fast enough.

In March, a shooter live-streamed on Facebook the brutal killing of 51 people at two mosques in New Zealand. The social media giant’s algorithms failed to detect the gruesome video. It took Facebook an hour to take the video down, and even then, the company was hard-pressed to deal with users who reposted it.

Facebook recently published figures on how often its AI algorithms successfully find problematic content. Though the report shows that the company has made tremendous advances in its years-long effort to automate content moderation, it also highlights contemporary AI’s frequent failure to understand context.

NOT ENOUGH DATA

Artificial neural networks and deep-learning technologies, at the bleeding edge of artificial intelligence, have automated tasks previously beyond the reach of computer software. Some of these tasks include speech recognition, image classification, and natural language processing (NLP).

In many cases, the precision of neural networks exceeds human capabilities. For example, researchers have trained deep-learning models that can predict a patient’s risk of breast cancer from mammograms up to five years in advance. But deep learning also has limits. Primarily, it needs to be “trained” on numerous examples before it can function optimally. If you want to create a neural network to detect adult content, you must first show it millions of annotated examples. Without high-quality training data, neural networks make dumb mistakes.
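To make that concrete, here is a minimal sketch, in Python with TensorFlow/Keras, of how such a classifier is trained on annotated examples. It is illustrative only: the directory layout, model architecture, and settings are assumptions, not Facebook’s or Tumblr’s actual systems.

```python
# Minimal sketch of training a binary image classifier on annotated examples.
# The "data/train/safe" and "data/train/nsfw" folders are hypothetical: every
# image must already be labeled, which is why such systems need huge datasets.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train",        # hypothetical path; subfolder names become the labels
    image_size=(224, 224),
    batch_size=32,
    label_mode="binary",
)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),            # normalize pixel values
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability the image is NSFW
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)  # quality depends entirely on the labeled data
```

The key point is the first step: the model learns only from the annotated examples it is shown, so gaps or biases in that training data translate directly into mistakes in production.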

Last year, Tumblr declared it would ban adult content on its website and use machine learning to flag posts containing NSFW images. But a premature deployment of its AI model ended up blocking harmless content such as troll socks, LED jeans, and a picture of Joe Biden.
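What that failure looks like in practice can also be sketched: at inference time the model returns a probability, and anything above a fixed threshold is hidden, so a model trained on too little or skewed data will push innocuous images over the line. The file name, threshold, and helper function below are hypothetical, not Tumblr’s actual pipeline.

```python
# Hypothetical moderation check: the classifier scores an image and a fixed
# cutoff decides whether the post is hidden.
import tensorflow as tf

model = tf.keras.models.load_model("nsfw_classifier.keras")  # hypothetical file
THRESHOLD = 0.5  # illustrative cutoff

def moderate(image_path: str) -> str:
    img = tf.keras.utils.load_img(image_path, target_size=(224, 224))
    batch = tf.expand_dims(tf.keras.utils.img_to_array(img), 0)
    score = float(model.predict(batch, verbose=0)[0][0])
    # With insufficient or skewed training data, harmless images (socks, jeans,
    # a news photo) can score above the cutoff and be blocked as false positives.
    return "blocked" if score >= THRESHOLD else "allowed"
```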
