Twitch ‘Clips’ feature is used by predators to record and share child abuse

In the spring of 2023, a 12-year-old boy went live on Twitch, the popular livestreaming site owned by Amazon.com Inc, to eat a sandwich and play his French horn. Minutes later, about a dozen viewers joined him. Through Twitch’s chat, one asked him to do a somersault. Another requested that he show his muscles.

In response, the boy pulled his pants down. The whole thing ended in an instant, but one viewer, who was following over a hundred other Twitch accounts appearing to belong to children, used a feature called “clips” to capture the fleeting moment in a 20-second video. The resulting clip has since been viewed over 130 times.

Twitch is planning to expand the “clips” function this year. As part of the previously announced build-out, the company will be encouraging more users to turn ephemeral livestreaming events into short, on-demand videos for display on a soon-to-be-launched discovery feed – or for easy export to other social networks, including TikTok.

Now, ahead of the move, online safety experts are warning that the tool is already being exploited by child predators to record and share the abuse of underage Twitch users.

An analysis by Bloomberg News of nearly 1,100 clips on Twitch found that at least 83 of the short videos contain sexualised content involving children. The Canadian Centre for Child Protection, which reviewed the material, identified 34 that depict young users – primarily boys between the ages of 5 and 12 – exposing themselves to the camera, often apparently following the encouragement of viewers during a livestream. The other 49 videos contained sexualised content involving minors, including footage of grooming attempts.

According to the Centre, the 34 most egregious clips have been viewed 2,700 times. The other explicit videos, the Centre noted, have been watched 7,300 times. When a viewer captures a livestream involving predation, it “becomes an almost permanent record of that sexual abuse”, said Stephen Sauer, the Centre’s director.

“There’s a broader victimisation that occurs once the initial livestream and grooming incident has happened because of the possibility of further distribution of this material,” he said.

Once Bloomberg alerted the company, Twitch removed the prohibited content. “Youth harm, anywhere online, is deeply disturbing,” Twitch chief executive officer Dan Clancy said in a statement. “Even one instance is too many, and we take this issue extremely seriously.”

Twitch initially launched the clips function in 2016. Over the past year, facing greater competition from ByteDance Ltd’s short-form video site TikTok, Twitch’s product team has made the expansion of clips a major focus. At the same time, the clips feature has remained among the least moderated on the site, according to people familiar with the safety protocols who asked for anonymity while discussing the inner workings of the company.

In the statement, Clancy said that “combating child predation meaningfully” requires collaboration. He noted that Twitch is partnering with various agencies to do so and has “made significant progress”. By continuously screening the live content on Twitch, Clancy said, the company is preventing “the creation and spread of harmful clips at the source”. Twitch is also working retroactively to “delete and disable” harmful clips while making sure such videos “aren’t available through public domains or other direct links”.

“Like all other online services, this problem is one that we’ll continue to fight diligently,” he added.

Over the years, company executives have struggled to stamp out child grooming on Twitch, which hosts over 96,000 live channels and 7 million monthly broadcasters. In 2022, Bloomberg News reported that child predators routinely use Twitch to track kids in real time, with over 297,000 children apparently targeted.

Since the report, Twitch has announced several changes. The company launched a new phone-verification requirement and is developing technology to catch and terminate accounts belonging to kids under 13. The company has also made it harder for minors who have been previously banned to create new accounts. Twitch uses language analysis tools to detect child grooming and AI technology to flag nudity.

In April, Twitch laid off at least 15% of its internal trust and safety team and increased its reliance on outside providers to identify and remove problematic content. At the moment, Twitch focuses most of its monitoring efforts on its livestreams, which the company says are reviewed using human moderators, artificial intelligence and other tools. By contrast, when it comes to moderating clips, Twitch relies solely on its users to report instances of suspicious or upsetting material.

In the last quarter of 2022, Twitch detected 318 instances a day of ostensible child grooming, a fifth of which led to account suspensions, according to a recent report by Australia’s eSafety Commissioner.

Experts say moderation challenges are intrinsic to Twitch’s design. Typically, social media sites such as YouTube and Instagram can identify when users are attempting to upload abusive videos by comparing new uploads against previously identified abuse material. On Twitch’s livestreams, child predation happens in real time, so there is no prior footage to compare against.

“Hash technology looks for something that’s a match to something seen previously,” said Lauren Coffren of the US National Center for Missing & Exploited Children. “Livestreaming means it’s brand new.”

In April, a bipartisan group of US senators introduced the Protecting Kids on Social Media Act, a bill aiming to improve children’s safety on sites such as Twitch that feature user-generated content.

Sauer said that social-media companies can no longer be trusted to regulate themselves. “We’ve been on the sidelines watching the industry do voluntary regulation for 25 years now,” he said. “We know it’s just not working. We see far too many kids being exploited on these platforms. And we want to see government step in and say, ‘These are the safeguards you have to put in place’.” – Bloomberg