Over 120,000 views of a video showing a boy being sexually assaulted. A recommendation engine suggesting that a user follow content related to exploited children. Users continually posting abusive material, delays in taking it down when it is detected and friction with organizations that police it.
All since Elon Musk declared that “removing child exploitation is priority #1” in a tweet in late November.
Under Mr. Musk’s ownership, Twitter’s head of safety, Ella Irwin, said she had been moving rapidly to combat child sexual abuse material, which was prevalent on the site — as it is on most tech platforms — under the previous owners. “Twitter 2.0” would be different, the company promised.
But a review by The New York Times found that the imagery, commonly known as child pornography, persisted on the platform, including widely circulated material that the authorities consider the easiest to detect and eliminate.
After Mr. Musk took the reins in late October, Twitter largely eliminated or lost staff experienced with the problem and failed to prevent the spread of abusive images previously identified by the authorities, the review shows. Twitter also stopped paying for some detection software considered key to its efforts.
All the while, people on dark-web forums discuss how Twitter remains a platform where they can easily find the material while avoiding detection, according to transcripts of those forums from an anti-abuse group that monitors them.
“If you let sewer rats in,” said Julie Inman Grant, Australia’s online safety commissioner, “you know that pestilence is going to come.”
In a Twitter audio chat with Ms. Irwin in early December, an independent researcher working with Twitter said illegal content had been publicly available on the platform for years and garnered millions of views. But Ms. Irwin and others at Twitter said their efforts under Mr. Musk were paying off. During the first full month of the new ownership, the company suspended nearly 300,000 accounts for violating “child sexual exploitation” policies, 57 percent more than usual, the company said.
The effort accelerated in January, Twitter said, when it suspended 404,000 accounts. “Our recent approach is more aggressive,” the company declared in a series of tweets on Wednesday, saying it had also cracked down on people who search for the exploitative material and had reduced successful searches by 99 percent since December.
Ms. Irwin, in an interview, said the bulk of suspensions involved accounts that engaged with the material or were claiming to sell or distribute it, rather than those that posted it. She did not dispute that child sexual abuse content remains openly available on the platform, saying that “we absolutely know that we are still missing some things that we need to be able to detect better.”
She added that Twitter was hiring employees and deploying “new mechanisms” to fight the problem. “We have been working on this nonstop,” she said.
Wired, NBC and others have detailed Twitter’s ongoing struggles with child abuse imagery under Mr. Musk. On Tuesday, Senator Richard J. Durbin, Democrat of Illinois, asked the Justice Department to review Twitter’s record in addressing the problem.
To assess the company’s claims of progress, The Times created an individual Twitter account and wrote an automated computer program that could scour the platform for the content without displaying the actual images, which are illegal to view. The material wasn’t difficult to find. In fact, Twitter helped promote it through its recommendation algorithm — a feature that suggests accounts to follow based on user activity.
Among the recommendations was an account that featured a profile picture of a shirtless boy. The child in the photo is a known victim of sexual abuse, according to the Canadian Center for Child Protection, which helped identify exploitative material on the platform for The Times by matching it against a database of previously identified imagery.
That same user followed other suspicious accounts, including one that had “liked” a video of boys sexually assaulting another boy. By Jan. 19, the video, which had been on Twitter for more than a month, had gotten more than 122,000 views, nearly 300 retweets and more than 2,600 likes. Twitter later removed the video after the Canadian center flagged it for the company.
In the first few hours of searching, the computer program found a number of images previously identified as abusive — and accounts offering to sell more. The Times flagged the posts without viewing any images, sending the web addresses to services run by Microsoft and the Canadian center.
One account in late December offered a discounted “Christmas pack” of photos and videos. That user tweeted a partly obscured image of a child who had been abused from about age 8 through adolescence. Twitter took down the post five days later, but only after the Canadian center sent the company repeated notices.
In all, the computer program found imagery of 10 victims appearing over 150 times across multiple accounts, most recently on Thursday. The accompanying tweets often advertised child rape videos and included links to encrypted platforms.
Alex Stamos, the director of the Stanford Internet Observatory and the former top security executive at Facebook, found the results alarming. “Considering the focus Musk has put on child safety, it is surprising they are not doing the basics,” he said.
Separately, to confirm The Times’s findings, the Canadian center ran a test to determine how often one video series involving known victims appeared on Twitter. Analysts found 31 different videos shared by more than 40 accounts, some of which were retweeted and liked thousands of times. The videos depicted a young teenager who had been extorted online to engage in sexual acts with a prepubescent child over a period of months.
The center also ran a broader scan against the most explicit videos in its database. There were more than 260 hits, with more than 174,000 likes and 63,000 retweets.
“The volume we’re able to find with a minimal amount of effort is quite significant,” said Lloyd Richardson, the technology director at the Canadian center. “It shouldn’t be the job of external people to find this sort of content sitting on their system.”
In 2019, The Times reported that many tech companies had serious gaps in policing child exploitation on their platforms. This past December, Ms. Inman Grant, the Australian online safety official, conducted an audit that found many of the same problems remained at a sampling of tech companies.
The Australian review did not include Twitter, but some of the platform’s difficulties are similar to those of other tech companies and predate Mr. Musk’s arrival, according to multiple current and former employees.
Twitter, founded in 2006, started using a more comprehensive tool to scan for videos of child sexual abuse last fall, they said, and the engineering team dedicated to finding illegal photos and videos was formed just 10 months earlier. In addition, the company’s trust and safety teams have been perennially understaffed, though the company continued expanding them even amid a broad hiring freeze that began last April, four former employees said.
Over the years, the company did build internal tools to find and remove some images, and the National Center for Missing and Exploited Children often lauded the company for the thoroughness of its reports.
The platform in recent months has also experienced problems with its abuse reporting system, which allows users to notify the company when they encounter child exploitation material. (Twitter offers a guide to reporting abusive content on its platform.)
The Times used its research account to report multiple profiles that were claiming to sell or trade the content in December and January. Many of the accounts remained active and even appeared as recommendations to follow on The Times’s own account. The company said it would need more time to unravel why such recommendations would appear.
To find the material, Twitter relies on software created by an anti-trafficking organization called Thorn. Twitter has not paid the organization since Mr. Musk took over, according to people familiar with the relationship, presumably part of his larger effort to cut costs. Twitter has also stopped working with Thorn to improve the technology. The collaboration had industrywide benefits because other companies use the software.
Ms. Irwin declined to comment on Twitter’s business with specific vendors.
Twitter’s relationship with the National Center for Missing and Exploited Children has also suffered, according to people who work there.
John Shehan, an executive at the center, said he was worried about the “high level of turnover” at Twitter and where the company “stands in trust and safety and their commitment to identifying and removing child sexual abuse material from their platform.”
After the transition to Mr. Musk’s ownership, Twitter initially reacted more slowly to the center’s notifications of sexual abuse content, according to data from the center. Such delays matter greatly to abuse survivors, who are revictimized with every new post. Twitter, like other social media sites, has a two-way relationship with the center. The site notifies the center (which can then notify law enforcement) when it is made aware of illegal content. And when the center learns of illegal content on Twitter, it alerts the site so the images and accounts can be removed.
Late last year, the company’s response time was more than double what it had been during the same period a year earlier under the prior ownership, even though the center sent it fewer alerts. In December 2021, Twitter took an average of 1.6 days to respond to 98 notices; last December, after Mr. Musk took over the company, it took 3.5 days to respond to 55. By January, it had greatly improved, taking 1.3 days to respond to 82.
The Canadian center, which serves the same function in that country, said it had seen delays as long as a week. In one instance, the Canadian center detected a video on Jan. 6 depicting the abuse of a naked girl, age 8 to 10. The organization said it sent out daily notices for about a week before Twitter removed the video.
In addition, Twitter and the U.S. national center seem to disagree about Twitter’s obligation to report accounts that claim to sell illegal material without directly posting it.
The company has not reported the hundreds of thousands of accounts it has suspended to the national center, Ms. Irwin said, because the rules require that the company “have high confidence that the person is knowingly transmitting” the illegal imagery, and those accounts did not meet that threshold.
Mr. Shehan of the national center disputed that interpretation of the rules, noting that tech companies are also legally required to report users even if they only claim to sell or solicit the material. So far, the national center’s data show, Twitter has made about 8,000 reports monthly, a small fraction of the accounts it has suspended.
Ms. Inman Grant, the Australian regulator, said she had been unable to communicate with local representatives of the company because her agency’s contacts in Australia had quit or been fired since Mr. Musk took over. She feared that the staff reductions could lead to more trafficking in exploitative imagery.
“These local contacts play a vital role in addressing time-sensitive matters,” said Ms. Inman Grant, who was previously a safety executive at both Twitter and Microsoft.
Ms. Irwin said the company continued to be in touch with the Australian agency, and more generally she expressed confidence that Twitter was “getting a lot better” while acknowledging the challenges ahead.
“In no way are we patting ourselves on the back and saying, ‘Man, we’ve got this nailed,’” Ms. Irwin said.
Offenders continue to trade tips on dark-web forums about how to find the material on Twitter, according to posts found by the Canadian center.
On Jan. 12, one user described following hundreds of “legit” Twitter accounts that sold videos of young boys who were tricked into sending explicit recordings of themselves. Another user characterized Twitter as an easy venue for watching sexual abuse videos of all types. “People share so much,” the user wrote.
Ryan Mac and Chang Che contributed reporting.