The amount of online imagery depicting children being sexually abused and exploited is out of control and it’s only getting worse.
In a new report, The New York Times investigated how technology companies and the government are failing to keep what it calls a “criminal underworld” of disturbing, explicit child pornography from spiraling out of control. Last year, tech companies reported finding a record 45 million online photos and videos of the abuse, more than twice the volume they found the year prior.
As the internet has expanded, so has the availability of content that depicts children, some of whom are only three or four years old, being tortured and abused.
“Historically, you would never have gone to a black market shop and asked, ‘I want real hard-core with 3-year-olds,’” said Yolanda Lippert, a prosecutor in Cook County, Ill., who leads a team investigating online child abuse. “But now you can sit seemingly secure on your device searching for this stuff, trading for it.”
Similar content has always been a problem online, just not on this level; a decade ago, the reported number of photos and videos found in a year was only around one million.
Law enforcement agencies assigned to tackle the problem say they are understaffed and underfunded. But at the heart of the problem are the technology companies themselves, which have done more to enable the spread of this horrific content than they have to stop it.
After years of uneven monitoring of the material, several major tech companies, including Facebook and Google, stepped up surveillance of their platforms. In interviews, executives with some companies pointed to the voluntary monitoring and the spike in reports as indications of their commitment to addressing the problem.
But police records and emails, as well as interviews with nearly three dozen local, state and federal law enforcement officials, show that some tech companies still fall short. It can take weeks or months for them to respond to questions from the authorities, if they respond at all. Sometimes they respond only to say they have no records, even for reports they initiated.
Hany Farid, a professor of digital analytics at the University of California, Berkeley, worked with Microsoft in 2009 to develop technology to detect child sexual abuse material. Farid told the Times that tech companies have been hesitant to dig into the issue.
“The companies knew the house was full of roaches, and they were scared to turn the lights on,” he said. “And then when they did turn the lights on, it was worse than they thought.”
The story is a familiar one for technology companies: egregious, exploitative content proliferates on their platforms, and the companies are often slow to take action. Versions of this have played out over the last several years with hate groups, terrorist groups and other types of abusive and damaging content. Often, if resolution comes, it comes only after public outcry.