In one Telegram group chat about the bot, its owner says that Telegram has blocked mentions of its name. However, WIRED was unable to confirm this or any other action taken by Telegram. Neither Telegram’s spokesperson nor the service’s founder, Pavel Durov, responded to requests for comment. The company, which is believed to be based in Dubai but has servers around the world, has never publicly commented on the harm caused by the Telegram bot or on its continued decision to allow it to operate.

Since it was founded in 2013, Telegram has positioned itself as a private space for free speech, and its end-to-end encrypted mode has been used by journalists and activists around the world to protect privacy and evade censorship. However, the messaging app has run into trouble with problematic content. In July 2017, Telegram said it would create a team of moderators to remove terrorism-related content after Indonesia threatened it with a ban. Apple also temporarily removed it from its App Store in 2018 after finding inappropriate content on the platform.

“I think they [Telegram] have a very libertarian perspective towards content moderation and just any sort of governance on their platform,” says Mahsa Alimardani, a researcher at the Oxford Internet Institute. Alimardani, who has worked with activists in Iran, points to Telegram notifying its users about a fake version of the app created by authorities in the country. “It seems that the times that they have actually acted, it’s when state authorities have got involved.”

On October 23, Italy’s data protection authority, the Garante per la Protezione dei Dati Personali, opened an investigation into Telegram and asked it to provide data. In a statement, the regulator said the nude images generated by the bot could cause “irreparable damage” to their victims. Since Italian officials opened their investigation, Patrini has conducted further research looking for deepfake bots on Telegram. He says a number of Italian-language bots appear to offer the same functionality as the one Sensity previously found; however, they do not appear to be working.

Separate research from academics at the University of Milan and the University of Turin has also found networks of Italian-language Telegram groups, some of them private and accessible only by invitation, sharing non-consensual intimate images of women that don’t involve deepfake technology. Some of the groups had more than 30,000 members and required members to share non-consensual images or be removed from the group. One group focused on sharing images of women taken in public places without their knowledge.

“Telegram should look inward and hold itself accountable,” says Honza Červenka, a solicitor at the law firm McAllister Olivarius, which specializes in cases involving non-consensual imagery and technology. Červenka says new laws are needed to force tech companies to better protect their users and to clamp down on abusive automation. “If it continues offering the Telegram Bot API to developers, it should institute an official bot store and certify bots the same way that Apple, Google, and Microsoft do for their app stores.” However, Červenka adds, there is currently little government or legal pressure to make Telegram take this kind of step.

Patrini warns that deepfake technology is advancing quickly, and that the Telegram bot is a sign of what is likely to come. The bot marked the first time this type of image abuse had been seen at such a large scale, and it is easy for anyone to use: no technical expertise is needed. It was also one of the first times that members of the public were targeted with deepfake technology; previously, celebrities and public figures were the targets of non-consensual AI porn. But as the technology is increasingly democratized, more instances of this type of abuse will be discovered online, he says.

“This was one investigation, but we are finding these sorts of abuses in multiple places on the internet,” Patrini explains. “There are, at a smaller scale, many other places online where images are stolen or leaked and are repurposed, modified, recreated, and synthesized, or used for training AI algorithms to create images that use our faces without us knowing.”

This story originally appeared on WIRED UK.

