The Dangers of Deepfakes
By: Gia Han Nguyen
There has been rapid technological progress globally, especially with the rise of generative AI, a type of AI that creates media such as videos, photos, and text. As AI gets better at making media look human-made, women’s livelihoods are increasingly endangered by the use of AI to create non-consensual porn bearing their real faces. Laura Bates’s book, The New Age of Sexism: How Emerging Technologies are Reinventing Misogyny, highlights the flood of deepfake porn online, mainly built from the faces of female celebrities and of ordinary women and girls. Unfortunately, few laws directly protect women, and the existing ones have loopholes that do not stop people from making non-consensual porn of women.
Because AI is being abused to harm women, it can feel sickening, traumatic, and isolating to consider the possibility of AI-made non-consensual porn featuring your face, or the face of any woman you know, without explicit consent. This global crisis affects not only adult women but young girls, at the hands of men and boys with Internet access; by bringing more awareness to it, we can make clear that deepfakes are harmful and can ruin people’s lives.
What is a deepfake?
According to Encyclopaedia Britannica, a deepfake is media created by AI that portrays things that do not exist in reality. ‘Deep’ credits AI’s use of deep-learning technology to learn different aspects of reality and make media as realistic as possible, while ‘fake’ indicates that the content is not real. Deepfakes include videos and photos, but also audio and text that seem human-made. To create a deepfake, people use two competing AI networks: one creates media from a real-life reference, while the other looks for differences between the real media and the AI-generated media; the first network keeps adjusting until the second can no longer detect errors and the generated media looks real.
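The two-network setup described above is known as a generative adversarial network (GAN). As a loose illustration only (a toy numpy sketch with made-up numbers, nowhere near a real deepfake system, which trains large neural networks on images), here is a one-dimensional “generator” learning to imitate a single statistic of reality while a “discriminator” tries to catch it:

```python
import numpy as np

rng = np.random.default_rng(0)
REAL_MEAN = 4.0  # the "reality" the generator must learn to imitate

def sample_real(n):
    return rng.normal(REAL_MEAN, 1.0, n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

g_b = 0.0              # generator parameter: mean of its fake samples
d_w, d_b = 0.1, 0.0    # discriminator: D(x) = sigmoid(d_w*x + d_b)
lr, batch = 0.05, 64
history = []

for step in range(2000):
    # Discriminator step: learn to score real samples near 1, fakes near 0.
    real = sample_real(batch)
    fake = rng.normal(0.0, 1.0, batch) + g_b
    p_real = sigmoid(d_w * real + d_b)
    p_fake = sigmoid(d_w * fake + d_b)
    d_w -= lr * (np.mean((p_real - 1) * real) + np.mean(p_fake * fake))
    d_b -= lr * (np.mean(p_real - 1) + np.mean(p_fake))

    # Generator step: shift its output so the discriminator scores it as real.
    fake = rng.normal(0.0, 1.0, batch) + g_b
    p_fake = sigmoid(d_w * fake + d_b)
    g_b -= lr * np.mean((p_fake - 1) * d_w)  # gradient of -log D(fake)
    history.append(g_b)

# Average the tail of training, since the two networks keep jostling.
learned_mean = float(np.mean(history[-500:]))
print(f"real mean: {REAL_MEAN}, generator's learned mean: {learned_mean:.2f}")
```

The generator never sees the real data directly; it only learns from how well it fools the discriminator, which is exactly why the outputs of mature systems like Veo 3 look so convincing.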
One notable example of a deepfake is a 2023 AI-generated video of Will Smith eating spaghetti with a fork, made with ModelScope. Not only does the video exaggerate Smith’s face while chewing and place spaghetti on his face, but multiple copies of him also appear and his facial proportions change. As the AI learns from clips how people eat spaghetti and how Will Smith eats food, the media it produces becomes more realistic. A more recent version made with Google’s Veo 3 (an AI generation tool) portrays Will Smith eating spaghetti without distorting his facial proportions or how he eats.
(Credit: https://commons.wikimedia.org/wiki/File:Will_Smith_Eating_Spaghetti_Original.webm [Bottom picture], https://commons.wikimedia.org/wiki/File:Will_Smith_eating_spaghetti_Google_Veo_3.webm [Top picture])
A lot of AI-generated media is now more realistic than before, and there have been reports of people mistaking AI-generated content for reality because of how “real” it looks. Despite this, Bates reports in her book that many companies support the use of generative AI for profit and make no active effort to stop people from using AI to scam and harm others. With the power of AI, what can’t people do?
How many women have been affected by deepfakes?
Apparently, corporations alone are not enough to stop people from creating non-consensual deepfake porn of women. With a quick Google search, anyone can make deepfakes for free or pay others to make them in order to hide their own identity. Because of these accessible deepfake-generation websites, many men have been creating non-consensual porn videos or images of female celebrities, or of real women they know, simply by providing the website with a picture. An estimated 1.8 billion women and girls online have no legal protection from digital violence (e.g. cyber harassment or stalking, image-based abuse, doxxing); 90-95% of online deepfakes are non-consensual pornographic videos and images, and 99% of the people depicted in them are women.
The exact number of women affected by deepfakes is unknown, but there are far too many examples of women’s livelihoods being harmed. Reports of cybercrime against women in India have risen to 80,000 in 2026, a 60% increase from 2024’s count of 50,000. Celebrities like Collien Fernandes, Taylor Swift, and Scarlett Johansson have spoken out about the volume of non-consensual deepfake porn that uses their faces. Deepfake cyberbullying is also a rising problem in schools. In one notable case in Almendralejo, Spain, 15 teenage boys used a generative AI website called ClothOff to create and share non-consensual deepfake porn of 20-30 female classmates ages 11-17, turning their social media photos into nudes. The boys were only sentenced to probation, and this is not an isolated incident: a similar situation happened in New Jersey, and in London one drove a 14-year-old girl to die by suicide.
What laws are there?
Laws are still catching up with the harm that deepfakes cause women, and many current laws have loopholes or weak enforcement. In the U.S., Melania Trump helped pass the Take It Down Act in 2025 and recently celebrated the law’s first victory: the conviction of a man who made non-consensual deepfake porn of six female victims. Internationally, the UK’s Online Safety Act of 2023 bars people from sharing digitally manipulated media, but it does not prevent them from creating it. Europe’s AI Act of 2024 requires transparency from deepfake creators, who must tell the public that what they made is AI. Mexico’s Ley Olimpia punishes digital violence. The UN has encouraged not only governments but also social media companies to stop people from easily creating and sharing deepfake content used for misinformation, fraud, and harm.
No single group or organization focuses on the fast removal of harmful deepfake content, and many organizations have not lived up to their own standards for protecting women. Although Google actively takes down websites that host non-consensual deepfake content, websites for creating deepfakes remain one Google search away. Social media sites like X (formerly Twitter) claim a “zero-tolerance policy” for non-consensual nudity, yet Elon Musk has allowed his AI chatbot, Grok, to create such images and, eventually, to profit off of harmful content targeting women and girls.
What can we do?
Help and support are available, however slowly the laws move. If you have been affected by cybercrime, please report it to the Cyber Civil Rights Initiative (CCRI) or Stop NCII (Stop Non-Consensual Intimate Images). CCRI is an organization that runs a crisis helpline for people affected by cybercrime such as non-consensual deepfake content, sextortion, doxxing, and more. Stop NCII is a tool that removes non-consensual images and their duplicates from the Internet with the help of participating companies. To learn more about how Stop NCII’s technology works, read more here.
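Stop NCII’s public materials describe matching images by “hashes” (digital fingerprints) so victims never have to upload the image itself. Its actual algorithm is different and far more robust, but as a loose illustration of the fingerprinting idea, here is a toy “average hash” on a made-up 16-pixel grayscale image:

```python
# Toy perceptual hash: an illustration of image fingerprinting, NOT
# Stop NCII's real algorithm. Each pixel becomes one bit: 1 if it is
# brighter than the image's average brightness, else 0.

def average_hash(pixels):
    """Fingerprint a grayscale image given as a list of 0-255 ints."""
    avg = sum(pixels) / len(pixels)
    return [1 if p > avg else 0 for p in pixels]

def hamming(h1, h2):
    """Count differing bits; a small distance means a likely match."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# A tiny made-up 4x4 "image", a slightly brightened copy of it,
# and an unrelated picture.
original = [10, 200, 30, 220, 15, 210, 25, 215,
            12, 205, 28, 225, 18, 198, 22, 230]
recompressed = [p + 3 for p in original]   # uniform brightness shift
unrelated = [128] * 8 + [0, 255] * 4       # a different image entirely

h_orig = average_hash(original)
print(hamming(h_orig, average_hash(recompressed)))  # 0 -> near-duplicate
print(hamming(h_orig, average_hash(unrelated)))     # 4 -> different image
```

Because participating companies compare only these fingerprints, duplicates and lightly edited copies can be found and removed without the intimate image ever leaving the victim’s device.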
If you feel worried and need a person or group to talk to about the trauma you have faced, whether it relates to cybercrime or not, consider joining counseling sessions or support groups. Because we understand and value the safety of our clients, our services can be provided in person or virtually to ensure you feel safe physically and emotionally. Join our community and subscribe to our newsletter for empowerment pieces, news about our different support groups, and our upcoming events. You are never, ever alone, online or in person.