Imagine someone taking your everyday photos and turning them into explicit images without your consent. Sadly, with new AI tools known as “nudify” bots, this is happening to millions of people, and it takes only a few clicks.
A recent investigation by Wired revealed that over 50 bots on Telegram allow users to create explicit photos and videos of people within minutes. Together, these bots have millions of users, and the true figure is likely much higher than what’s reported. The rise of these tools highlights a major privacy problem, especially because there’s little control over who uses them and for what purpose.
How Did “Nudify” Bots Begin?
The misuse of AI-generated content traces back to 2017, when the term “deepfake” was first coined. It began with a video that placed the face of actress Gal Gadot onto an existing adult video, making it look as though she was in it. This shocking misuse of the technology opened the door to further unethical uses of AI.
Over time, AI technology became more advanced, leading to more realistic face-swapping and explicit deepfake content. The bots uncovered on Telegram, however, are primarily used to digitally “remove” clothing from ordinary photos, which can cause serious emotional harm to the victims.
The Dangers of AI Bots
Using these bots isn’t just creepy—it’s dangerous. These AI tools don’t always work as advertised, and many are scams that can leave users with low-quality images or even malware. Some bots require users to buy tokens to create images, turning this into a profitable but unethical business for creators.
Aside from that, creating and sharing these fake images can cause long-term harm to the victims. Nonconsensual intimate image (NCII) abuse is considered a form of sexual violence, and many victims experience lasting emotional and psychological trauma. Explicit deepfakes involving minors are even more disturbing: such material is illegal and deeply damaging to society as a whole.
What’s Being Done to Stop It?
There are ongoing efforts to stop the spread of these harmful bots. In the United States, lawmakers have introduced the DEEPFAKES Accountability Act to regulate the use of AI-generated content. Telegram has also updated its policy to cooperate with law enforcement, turning over user details when those users are involved in criminal activity. Some companies, like Google, have taken steps to keep nonconsensual AI-generated pornography out of their search results.
Despite these efforts, however, the market for these bots continues to grow.
Protect Yourself and Your Privacy Online
It’s important to be aware of how your online activity and social media posts can be misused by AI. Many of us don’t realize the risks when we share pictures online, but there are some real dangers to consider:
- Deepfakes: AI-generated content like deepfakes can damage your reputation or invade your privacy.
- Metadata: The photos you upload often contain hidden EXIF data (such as where and when they were taken) that others can extract and misuse; see the short sketch below for one way to strip it before sharing.
- Intellectual Property: If you share photos you didn’t create, the original owner’s rights could be violated.
- Facial Recognition: Facial-recognition tools can match your face across the web and link it to other content, including deepfakes.
- Online Memory: Once something is posted online, it’s nearly impossible to fully remove it.
Before you post pictures, especially of yourself or loved ones, think about how they could be used by others, including AI bots.
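One practical step is to strip hidden metadata before you upload a photo. Below is a minimal Python sketch, assuming the Pillow imaging library (not a tool mentioned above) is installed with pip install Pillow; it copies only the pixel data into a new file, leaving EXIF information such as GPS coordinates behind.

```python
# Minimal sketch: strip hidden EXIF metadata (e.g., GPS location) from a photo.
# Assumes the Pillow library is installed: pip install Pillow
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Save a copy of the image that contains only pixel data, no EXIF block."""
    with Image.open(src_path) as img:
        pixels = list(img.getdata())            # raw pixel values only
        clean = Image.new(img.mode, img.size)   # fresh image with no metadata
        clean.putdata(pixels)
        clean.save(dst_path)

# Example (hypothetical file names):
# strip_metadata("vacation.jpg", "vacation_clean.jpg")
```

Many social platforms strip this data on upload, but removing it yourself first leaves nothing to chance.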
Stay Informed, Stay Safe
It’s up to all of us to be mindful of how we use social media and protect our digital privacy. If you found this article helpful, please share it with friends and family to raise awareness. Also, leave a comment below to let us know what steps you take to stay safe online.