How Employers Can Take a Stand Against Deepfake Pornography

[Image: 3D rendering of a wireframe cyborg, black and white. Photo credit: akinbostanci]

In last week’s blog, “ChatGPT, AI, and Why Machines Shouldn’t Be Behind Sexual Violence Policy and Procedure,” we discussed why AI, given its faulty nature, should not be trusted to write workplace policies. We argued that its history of reproducing sexist and racist tropes, its incomplete policy writing, its factual inaccuracies, and its lack of human expertise make it a technology incompatible with progress in this area. But AI is not only a flawed policy-writing tool; it can also be used deliberately to cause sexual harm. This week, we continue our AI discussion by diving into the issue of deepfake pornography and how workplaces can both support victims and take a stand against this form of image-based abuse.

“Deepfakes” are images, videos, or audio modeled on real images or sounds but depicting fake situations and scenarios. Some deepfakes are easy to identify; others can be incredibly convincing. When real people are used as the reference point, deepfakes can cause real financial, reputational, and emotional damage. Horrendous uses of AI in recent years include a deepfake recording of a school principal saying racist and antisemitic things, attempts to sway voters by humiliating politicians or spreading disinformation about their stances and endorsements, and even a call to a parent faking a child’s kidnapping for ransom money. These scenarios will only increase in frequency the longer AI is left unchecked by our legal system, but in this blog, we will focus on the harms of deepfake pornography.

Deepfake pornography is the depiction of real people in sexual situations that never occurred. It is by far the most prevalent and concerning use of the technology: as of 2019, “96% of deepfakes are sexually explicit and feature women who didn’t consent to the creation of the content,” and a 2023 study puts that rate closer to 98%. In early 2024, estimates suggested that almost 100,000 sexually explicit deepfake videos and images circulate across the web on any given day, spread across 9,500+ sites. Nearly 100% of deepfake pornography videos depict female subjects, with a significant portion of them being underage. This makes the proliferation of deepfakes an issue of gender-based violence, and that should be a concern for everyone.

Deepfake pornography has far-reaching consequences for its victims, including at the workplace. Some victims have been fired. Others have seen their professional prospects endangered by these videos remaining online. Many struggle with the emotional fallout of this sexual violation: psychological distress, a sense of mistrust, and a desire to withdraw from public life altogether. Plenty of victims are targeted because of their job (a journalist reporting on a child sex abuse story, a mental health expert speaking before a parole board, or even a world-famous pop star) in an effort to make a mockery of professional, successful women. In the United States, there is little legal recourse for victims, but that does not mean that workplaces cannot take their own stand on this issue. Below are our tips for how employers can stand by victims and make their views on deepfake pornography known.

Coordinate with Industry Leaders, Internet Safety Experts, and Anti-Sexual Violence Advocates to Facilitate Conversation

In 2024, we were gratified to see companies like Glamour take the issue of deepfake pornography seriously by supporting the first global summit on deepfake abuse. Anyone can become a victim of deepfake pornography, but it may impact different industries in different ways. That is why we encourage businesses of all kinds to collaborate with organizations focused on internet safety and sexual violence prevention to spread awareness and continue these conversations in a foundational, meaningful way.

Have a Zero-Tolerance Policy for Deepfake Sexual Harassment

Another move workplaces can make is to update their sexual harassment policy to list deepfake pornography explicitly as a form of sexual harassment. This should be paired with procedures for responding to employees accused of creating this video or imagery. RALIANCE would be happy to collaborate with businesses to modify their existing policies and procedures to adequately encompass this issue.

Make One’s Stance Clear to Tech Companies

We know that stopping deepfake pornography is not possible without the full cooperation of tech companies, particularly the search engines that surface this content, the sites that take ads for it, and the payment processing services that facilitate its purchase. Google alone drives 68% of web traffic to sites that host deepfake pornography, and social media sites like Reddit, X, and Telegram have all seen an increase in referral links to deepfake pornography in the past year. While sexually explicit deepfakes violate these platforms’ existing policies, the platforms are not making it a priority to remove this content (which may have to do with how profitable this image-based abuse is for them). Organizations can sign this petition by #MyImageMyChoice to send a message to tech companies and politicians that the sites responsible for these deepfakes must be blocked. #MyImageMyChoice has also written a letter to Google that employers can send on behalf of their company. Employers might also consider crafting their own public statements calling on these tech companies to block, de-index, and refuse service to these sites. Please review #MyImageMyChoice’s list of tech corporations’ failures to disrupt this problem through their existing policies, content moderation practices, and payment processes.

Have Resources for Victims

Being the subject of deepfake pornography can be confusing and frightening. Employers can step in for these colleagues by a) conveying that their job security is in no way affected by these deepfakes and b) directing victims to resources that can best meet their needs. Below is a list compiled by #MyImageMyChoice and The Reclaim Coalition to End Online Image-based Sexual Violence:

Get support

988 Lifeline

RAINN

Love Is Respect

Sanar Institute

Chayn

Callisto & Callisto Vault

Content removal

Global resources

The Revenge Porn Helpline

The Cyber Civil Rights Initiative

Take It Down

Stop Non-Consensual Intimate Image (NCII) Abuse

Alecto AI

Legal support

C.A. Goldberg, PLLC

The Reclaim Coalition

Stay Aware of the Legal Landscape

While there has been more movement toward anti-deepfake legislation, the path to legal recourse is difficult and varies state by state. Employers should pay attention to the DEFIANCE Act of 2024 as well as similar legislation in the states where they operate. While some states have introduced and even enacted legislation, the Trump Administration has rolled back federal safeguards around AI, so the way the United States combats deepfake pornography is likely to remain an evolving issue. Employers may also consider supporting local, state, or federal legislation that combats deepfake pornography.

As AI becomes more and more accessible, deepfake pornography and other forms of bullying and abuse will increase in prevalence and popularity. It is up to all of us to take a stand against this practice and work to make the internet a place where people are safe from all types of image-based sexual abuse.

 

RALIANCE is a trusted adviser for organizations committed to building cultures that are safe, equitable, and respectful. RALIANCE offers unparalleled expertise in serving survivors of sexual harassment, misconduct, and abuse, which drives our mission to help organizations across sectors create inclusive environments for all. For more information, please visit www.RALIANCE.org.


  
