Editor’s note: This story contains descriptions of sexual violence.
Major tech companies based in the US are failing to stop the circulation of child sexual abuse materials on their platforms, according to an investigative report published by the New York Times on Saturday. Survivors fear the implications of the indefinite digital trail of their abuse.
New York Times investigative reporters Michael H. Keller and Gabriel J.X. Dance spoke to two sisters in the Midwest whose father documented their sexual abuse when they were 7 and 11. Photos of them showed up in 130 child sexual abuse investigations and their abuse has led to hospitalizations for suicidal thoughts.
The family of another survivor –– a teenage girl living on the West Coast –– fears the day that she will turn 18 and be faced with the potential trauma of receiving reports about the status of her online sexual abuse material. It could take years for Facebook, Google, and Microsoft to track the illegal images that are already online.
In addition to search engines, criminals use other technologies and platforms to find online sexual abuse materials, including chat apps and cloud storage for image sharing. Live streams using programs like the communications platform Zoom are harder to detect and don’t leave a trace, making them more difficult to regulate. And while child online sexual abuse is on the rise worldwide, vulnerable communities who live in developing countries that are receiving access to the internet for the first time are increasingly at risk.
Read More: This Is the Surprising Reason Child Sexual Abuse Is on the Rise
Apple does not scan its cloud storage for online sexual abuse materials and encrypts its messaging app, which makes tracking “virtually impossible,” the New York Times said.
None of the largest cloud storage platforms — including Amazon Web Services, Dropbox, Google Drive, and Microsoft’s OneDrive and Azure — scan for abuse material when files are uploaded, according to the New York Times’ reporting. Dropbox, Google, and Microsoft’s consumer products only scan for illegal images when people share them.
In July, a Dropbox spokesperson told the New York Times that scanning for illicit videos was not a “top priority,” but the company said it began scanning some videos in October.
Facebook thoroughly scans its platforms –– accounting for 90% of the imagery flagged by tech companies in 2018 –– but it doesn’t search all of its databases, the New York Times found. Most illegal images on the platform are shared through Facebook Messenger but the app will eventually be encrypted, making it harder to detect these materials. Snapchat and Yahoo look for illegal photos, but not videos.
There is no set of standards for identifying illegal video content, and many major platforms, such as AOL, Snapchat, and Yahoo, don’t scan for it. AOL and Yahoo did not respond to the New York Times’ requests for comment about their policies. Snapchat is working with industry partners to figure out a solution, a spokesperson told the New York Times. The messaging app Kik did not scan for illicit videos as of October, but the company’s new owner said on Friday that it has started.
Former employees at Microsoft, Twitter, and Tumblr told the New York Times that the major tech companies have known for years that videos of child sexual abuse were being shared on their platforms.
Some tech companies say they have been hesitant to look for abuse on their platforms because it can raise privacy concerns, according to the New York Times. Apple declined to specify how it scans its platforms, saying that revealing the information could help criminals.
Tech companies are doing the bare minimum to stop online child sexual abuse while preserving their reputations, Chad M.S. Steel, a professor at George Mason University who assisted federal investigators in abuse-related cases, told the New York Times.
The tech industry approved a process for sharing video security information to flag illicit material in 2017 as part of a project by the Technology Coalition, but the plan hasn’t taken off due to lack of action, according to the New York Times.
Developed in 2009, the software PhotoDNA is the main method for detecting illegal imagery. But PhotoDNA has its limitations because there’s no single list of known illegal materials, which makes it easy for images to go unnoticed, according to the report.
As part of its investigation, the New York Times created a computer program to scan Microsoft’s search engine Bing –– commonly used by child abusers –– for illicit materials. In response to a report commissioned by the tech news site TechCrunch in January, which exposed the availability of sexual abuse materials on the platform, Microsoft said it would ban search terms on Bing that led to this content. The New York Times was still able to pull up images flagged by PhotoDNA as inappropriate and the search engine recommended other search terms when users looked up child abuse websites.
The New York Times’ program also located known abuse imagery on the Yahoo and DuckDuckGo search engines. Both platforms said they relied on Microsoft to remove illegal content. Microsoft said it had found the problem in its scanning process, but the New York Times’ program continued to surface illicit images.
The New York Times’ program also scanned Google and did not come across abuse content, but the Canadian Centre for Child Protection showed that images of child sexual abuse had been located on the search engine and that the company hadn’t always removed them. The Times asked Google about the photos the Canadian Centre discovered, and the company agreed to remove them, acknowledging that they should’ve been taken down earlier. Two weeks after the images were removed, more photos that the Canadian Centre identified as child sexual abuse material became available, but Google said they didn’t meet its criteria for removal.
More than 22 million videos of child sexual abuse were reported to the National Center for Missing & Exploited Children in 2018. In all, a record 45 million photos and videos were flagged as online child sexual abuse material that year.
In May, the World Childhood Foundation USA (WCF) released the Economist Intelligence Unit (EIU)’s updated annual global index, “Out of the Shadows: Shining Light on the Response to Sexual Abuse and Exploitation.” The EIU found that child sexual abuse and exploitation happens in wealthy and poor countries alike, but predators are increasingly seeking out victims in poor countries and areas affected by crisis and conflict where access to the internet and encrypted technology is growing.
Gender inequality is directly linked to sexual violence against children, EIU said. While girls are the primary victims of abuse, boys are overlooked and discouraged from reporting their cases due to stigma. The data to measure the scale of the problem is lacking, and countries have mostly worked within a legal framework rather than implementing policy, according to the report.
Sexual abuse can lead to symptoms of depression, anxiety, addiction, feelings of isolation, and mistrust. Child sexual abuse can jeopardize a person’s economic well-being, often leading to homelessness, unemployment, interrupted education, and difficulty building meaningful relationships.
“Every time an image of child sexual abuse is created, viewed, or shared online, child sexual abuse is perpetuated,” Samantha Grenville, director at the Economist Intelligence Unit, told Global Citizen earlier this year.
To stop further child sexual violence and exploitation, the EIU recommends engaging government agencies, the private sector (with an emphasis on information and communication technology companies), and civil society to protect children around the world.
You can read the New York Times’ full investigative story here, and learn more about the “Out of the Shadows” index and its findings here.