ChatGPT, AI, and Why Machines Shouldn’t Be Behind Sexual Violence Policy and Procedure

[Image: A glowing "AI" chip on a circuit board. Photo Credit: Just_Super]

With increased access to ChatGPT and AI, research and writing have never been faster or easier. These tools can be great resources in the workplace, but they also come with serious drawbacks. As an organization made up of staff who have worked in the anti-sexual violence field for decades, we think it is important to consider the role technology could play in our field. We have seen how human understanding of sexual violence evolves, how technology is used to facilitate sexual violence, and how valuable human expertise is when aiding survivors and creating reforms. This leads us to conclude that AI should never be a substitute for human judgment in creating workplace sexual violence policy and procedures. In today’s blog, we’ll dive into the harms AI can perpetuate and why organizations like RALIANCE are irreplaceable in this area.

In recent years, AI-facilitated sexual harms have become more and more prevalent. Deepfake pornography, for example, accounts for 96% of deepfake content online and most often features sexually explicit imagery and video of underage girls and women who did not consent to its creation. In an internet economy in which 9,500 sites are used to commit intimate privacy violations like deepfake pornography, and with major social media companies still failing to curb (or even outright profiting from) these deepfakes, using any AI service that allows its technology to be abused in this way poses a real moral quandary.

Of course, the intersection of sexual harm and AI extends beyond image and video creation. Companies considering using AI to draft policy or process harassment complaints should know a few things first.

AI Has a Proven Racial and Gender Bias

People may presume that there is no more neutral source than an algorithm, but they would be incorrect. Only 12% of AI researchers are women, which means that “female input is lacking in the training and development of the AI models at the stages when human-cognitive biases are ingrained into the models’ reasoning and internal perceptions of the world.” This has led to real-world consequences, like AI recruitment tools favoring resumes from men over women. AI itself struggles to “think” of women or people of color in leadership, has more trouble distinguishing between people of color, and flags African American English as more offensive than other English dialects. With bias so deeply entrenched in the technology, it’s no wonder we encourage employers to exercise caution when using it to write policy or process complaints. These policies and procedures must be equitable for all staff, which is why a human should always be involved to counteract these biases.

AI Struggles to Identify Harmful Behaviors

While AI bots have been found to mimic derogatory speech toward others at the instruction of users, they have also been targets of harmful behavior themselves. AI technology (including voice bots) is often coded as “feminine,” embodying gender stereotypes such as submissiveness. Men have been found to use chatbots to create AI partners, verbally abuse them, and share those interactions with their followers. One analysis of voice-generated responses to sexually suggestive, derogatory, or aggressive speech found replies ranging from dismissiveness to polite subservience, gratitude, and even flirtation. While the programmed responses have improved over time, none of them forcefully reject the misogynistic speech or explain why it is wrong. If AI struggles to recognize the scope of harm committed against itself, it clearly cannot be trusted to assess similar sexist behavior leveled against real people on its own.

AI Needs to Be Fact-Checked

While AI is built to summarize content from across the internet, it still has the potential to draw from untrustworthy sources or even manufacture false ones. For example, a chatbot, citing a Washington Post article, named one law professor as having sexually harassed a student on a school trip. Except…there was no school trip, no complaint had ever been filed, and that Washington Post article didn’t even exist. This kind of disinformation is running rampant, and one mayor even considered taking legal action for defamation over ChatGPT’s false claims about him. Princeton computer science professor Arvind Narayanan referred to ChatGPT as a “bull**** generator,” and had this to say about the technology behind it:

“It is trained to produce plausible text. It is very good at being persuasive, but it’s not trained to produce true statements. It often produces true statements as a side effect of being plausible and persuasive, but that is not the goal.

This actually matches what the philosopher Harry Frankfurt has called bull****, which is speech that is intended to persuade without regard for the truth. A human bull******er doesn’t care if what they’re saying is true or not; they have certain ends in mind. As long as they persuade, those ends are met. Effectively, that is what ChatGPT is doing. It is trying to be persuasive, and it has no way to know for sure whether the statements it makes are true or not.”

When a technology doesn’t prioritize the truth, it becomes an unreliable tool for workplaces researching practices in their own industry, employee history, and much more. Fabrications like the one about the law professor mentioned above also contribute to casting doubt on real sexual assault claims. If AI is used, staff must give its output a second look for accuracy before it is relied on for educational purposes or safety protections.

AI Used for Complaint Processing Has Legal Ramifications

The legal landscape surrounding AI is still in its early stages, so employers should also familiarize themselves with what’s currently on the books. Employees have the right to privacy and to work in an environment free of discrimination, and AI has the capacity to violate both of those rights through overreaching surveillance and the biased programming described above. According to the law firm Lipsky Lowe, AI tools could violate Title VII of the Civil Rights Act. They say, “Employers must ensure that AI tools do not inadvertently perpetuate biases or lead to discriminatory practices. AI algorithms must be carefully designed and regularly audited to comply with these federal regulations.” They also caution that, “Inaccurate data analysis or biased algorithms could lead to wrongful accusations or privacy violations.” Both inaccuracy and bias would further complicate how employers handle workplace sexual harassment and discrimination, whether the harm is facilitated by this technology or the technology is used to analyze an existing complaint. While the Trump administration has rolled back federal safeguards against AI and removed publications from federal sites that provided guidance on AI and protections against it, employers can still view the Equal Employment Opportunity Commission’s (EEOC’s) Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964 through the Wayback Machine for guidance.

AI’s Policy Writing Is Often Incomplete

AI can create a good starting place for writing, but that doesn’t mean it can create all-encompassing, comprehensive policy. Forbes has reported that some AI-written employee handbooks didn’t cover anti-harassment policy at all, which would expose an employer to serious legal and financial risk. Ben Houghton, VP of Engineering at Iris, said, “While AI tools can help whip up a first draft of these documents in seconds, their underlying training datasets are often out of date when it comes to constantly evolving labor laws.” Without proofreading AI output, employees and their employer are at a disadvantage in understanding current policy and employee rights. This is why AI-generated drafts must be reviewed by a person with expertise before being distributed.

RALIANCE and Other Anti-Sexual Violence Organizations Can Evolve and Perceive in Ways AI Cannot

Finally, it must be said that real-life advocates understand the nature of sexual violence better than any machine can. Our movement has listened to survivor stories for decades, has advocated for survivors’ needs at every cultural and community level, and has continued to grow in its understanding of the issue. AI is limited to pre-existing data and conversation, but we are ever evolving. RALIANCE will be updating its taxonomy because there is much more to understand. Whether it’s adding further context to sexual violations during a pandemic, learning about the intersection of racial microaggressions and sexual harassment, or confronting a dimension of sexual violence we have not yet encountered, we remain committed to evolving so we can understand and combat workplace sexual abuse, assault, and harassment in a way that AI cannot.

With the speed of technological advancement, this will likely not be the last time we discuss AI, its role in the workplace, and how it can impact survivors who need protection. However, we hope this blog has demonstrated that human expertise is irreplaceable and vital to creating and maintaining safe and equitable workplaces.

RALIANCE is a trusted adviser for organizations committed to building cultures that are safe, equitable, and respectful. RALIANCE offers unparalleled expertise in serving survivors of sexual harassment, misconduct, and abuse, which drives our mission to help organizations across sectors create inclusive environments for all. For more information, please visit www.RALIANCE.org.


